The present disclosure relates to gaming applications, and, in particular, to saving information in gaming applications.
The present disclosure may include material that is subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent disclosure as it appears in the patent files or records of the United States Patent and Trademark Office (USPTO), but otherwise reserves all rights under copyright law.
A gaming application may benefit from the capability to save a game in progress. This enables the user to temporarily stop using the gaming application and later resume the gaming application without losing progress. In some gaming applications, the capability to save a game in progress is tightly tied to a particular gaming application, such that systems that are used to save a game in progress in one gaming application may not be usable to provide similar functionality in a different gaming application.
In accordance with some implementations, a method is performed at an electronic device with one or more processors and a non-transitory memory. The method includes tracking a state of a running game application. Changes to the game state may be pushed to a game state subsystem. A user may activate an affordance in a user interface to save the game. In response to the affordance being activated, the game state subsystem may determine whether all changes of the game state have been received by the game state subsystem. If so, the game state subsystem may store the game state to a memory, e.g., the non-transitory memory.
In accordance with some implementations, a method is performed at an electronic device with one or more processors, a display, and a non-transitory memory. The method includes obtaining a gaming application of a device. While executing the gaming application, a user interface that includes a save affordance may be overlaid on the gaming application. In response to detecting a user input directed to the save affordance, state information for the gaming application may be determined. A first portion of the state information may be stored to a data structure during execution of the gaming application and may indicate respective statuses of one or more entities associated with the gaming application that have changed since a previous save. The first portion of the state information, which indicates the respective changed statuses of the one or more entities, may be saved in a non-transitory memory as a representation of the gaming progress of the user.
In accordance with some implementations, an electronic device may include one or more processors, a non-transitory memory, a display, and one or more programs. The one or more programs may be stored in the non-transitory memory and may be configured to be executed by the one or more processors. The one or more programs may include instructions for obtaining a gaming application of the electronic device. While executing the gaming application, a user interface that includes a save affordance may be overlaid on the gaming application. In response to detecting a user input directed to the save affordance, state information may be determined for the gaming application. A first portion of the state information may indicate respective statuses of one or more entities associated with the gaming application that have changed since a previous save. The first portion of the state information that indicates the respective changed statuses of the one or more entities may be saved in the non-transitory memory as a representation of the gaming progress of the user.
For a better understanding of the various described implementations, reference should be made to the Description, below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
In accordance with common practice, the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method, or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.
Reference will now be made in detail to implementations, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various described implementations. However, it will be apparent to one of ordinary skill in the art that the various described implementations may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the implementations.
It will also be understood that, although the terms first, second, etc. are, in some instances, used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without departing from the scope of the various described implementations. The first contact and the second contact are both contacts, but they are not the same contact, unless the context clearly indicates otherwise.
The terminology used in the description of the various described implementations herein is for the purpose of describing particular implementations only and is not intended to be limiting. As used in the description of the various described implementations and the appended claims, the singular forms “a,” “an,” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
As used herein, the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting,” depending on the context. Similarly, the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event],” depending on the context.
A person can interact with and/or sense a physical environment or physical world without the aid of an electronic device. A physical environment can include physical features, such as a physical object or surface. An example of a physical environment is a physical forest that includes physical plants and animals. A person can directly sense and/or interact with a physical environment through various means, such as hearing, sight, taste, touch, and smell. In contrast, a person can use an electronic device to interact with and/or sense an extended reality (XR) environment that is wholly or partially simulated. The XR environment can include mixed reality (MR) content, augmented reality (AR) content, virtual reality (VR) content, and/or the like. With an XR system, some of a person's physical motions, or representations thereof, can be tracked and, in response, characteristics of virtual objects simulated in the XR environment can be adjusted in a manner that complies with at least one law of physics. For instance, the XR system can detect the movement of a user's head and adjust graphical content and auditory content presented to the user similar to how such views and sounds would change in a physical environment. In another example, the XR system can detect movement of an electronic device that presents the XR environment (e.g., a mobile phone, tablet, laptop, or the like) and adjust graphical content and auditory content presented to the user similar to how such views and sounds would change in a physical environment. In some situations, the XR system can adjust characteristic(s) of graphical content in response to other inputs, such as a representation of a physical motion (e.g., a vocal command).
Many different types of electronic systems can enable a user to interact with and/or sense an XR environment. A non-exclusive list of examples include heads-up displays (HUDs), head mountable systems, projection-based systems, windows or vehicle windshields having integrated display capability, displays formed as lenses to be placed on users' eyes (e.g., contact lenses), headphones/earphones, input systems with or without haptic feedback (e.g., wearable or handheld controllers), speaker arrays, smartphones, tablets, and desktop/laptop computers. A head mountable system can have one or more speaker(s) and an opaque display. Other head mountable systems can be configured to accept an opaque external display (e.g., a smartphone). The head mountable system can include one or more image sensors to capture images/video of the physical environment and/or one or more microphones to capture audio of the physical environment. A head mountable system may have a transparent or translucent display, rather than an opaque display. The transparent or translucent display can have a medium through which light is directed to a user's eyes. The display may utilize various display technologies, such as uLEDs, OLEDs, LEDs, liquid crystal on silicon, laser scanning light source, digital light projection, or combinations thereof. An optical waveguide, an optical reflector, a hologram medium, an optical combiner, combinations thereof, or other similar technologies can be used for the medium. In some implementations, the transparent or translucent display can be selectively controlled to become opaque. Projection-based systems can utilize retinal projection technology that projects images onto users' retinas. Projection systems can also project virtual objects into the physical environment (e.g., as a hologram or onto a physical surface).
In the example of
As illustrated in
In some implementations, the XR environment 106 includes a virtual environment that is a simulated replacement of a physical environment. In some implementations, the XR environment 106 is synthesized by the electronic device 100. In such implementations, the XR environment 106 is different from a physical environment in which the electronic device 100 is located. In some implementations, the XR environment 106 includes an augmented environment that is a modified version of a physical environment. For example, in some implementations, the electronic device 100 modifies (e.g., augments) the physical environment in which the electronic device 100 is located to generate the XR environment 106 (e.g., by displaying virtual content overlaid on a video pass-through representation of the physical environment using an opaque display or on a view of the physical environment through an at least partially transparent display). In some implementations, the electronic device 100 generates the XR environment 106 by simulating a replica of the physical environment in which the electronic device 100 is located. In some implementations, the electronic device 100 generates the XR environment 106 by removing and/or adding items from the simulated replica of the physical environment in which the electronic device 100 is located. In some implementations, the XR environment 106 represents a communication session between the electronic device 100 and another electronic device. For example, the XR environment 106 may correspond to a video call between the electronic device 100 and the other electronic device.
In some implementations, the XR environment 106 includes various virtual objects such as an XR object 110 (“object 110”, hereinafter for the sake of brevity). In some implementations, the XR environment 106 includes multiple objects, such as XR objects 112, 114, and 116. In some implementations, the virtual objects are referred to as graphical objects or XR objects. In various implementations, the electronic device 100 obtains the objects from an object datastore (not shown). For example, in some implementations, the first electronic device 100 retrieves the object 110 from the object datastore. In some implementations, the virtual objects represent physical articles. For example, in some implementations, the virtual objects represent equipment (e.g., machinery such as planes, tanks, robots, motorcycles, etc.). In some implementations, the virtual objects represent fictional elements (e.g., entities from fictional materials, for example, an action figure or a fictional equipment such as a flying motorcycle). In some implementations, the virtual objects represent entities in a gaming environment, such as pieces or a board in a board game or characters or objects in a role-playing game.
In various implementations, the electronic device 100 (e.g., the game system 200) tracks a state of a game application as the game application is running. Changes to the game state may be pushed to a game state subsystem (not shown in
The game state may include presentation information that is used to present the game, e.g., render the game. For example, the presentation information may include texture maps. The presentation information may be generated, for example, based on the entity status information and/or based on metadata associated with the game. For example, the electronic device 100 may generate a rendering of a character entity based on a description of the character entity and/or based on a texture map of the character entity that is stored as metadata. In some implementations, the presentation information may not be saved each time the game state is saved.
As shown in
In response to the affordance 150 being activated, the game state subsystem may determine whether all of the changes to the game state have been received by the game state subsystem. If so, the game state may be stored to a memory (not shown in
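The flow described above can be illustrated with a minimal sketch. The class and method names below (`announce_change`, `push_change`, `on_save_affordance`) are illustrative assumptions, not part of the disclosure; the dictionary stands in for the non-transitory memory.

```python
from dataclasses import dataclass, field

@dataclass
class GameStateSubsystem:
    """Sketch of a game state subsystem: the running game pushes changes,
    and a save is committed only once every announced change has arrived."""
    _state: dict = field(default_factory=dict)
    _pending: int = 0          # changes announced by the game but not yet pushed
    _saved: dict = field(default_factory=dict)

    def announce_change(self) -> None:
        # The running game signals that a change is in flight.
        self._pending += 1

    def push_change(self, entity_id: str, status: dict) -> None:
        # A change to the game state is pushed to the subsystem.
        self._state[entity_id] = status
        self._pending -= 1

    def all_changes_received(self) -> bool:
        return self._pending == 0

    def on_save_affordance(self) -> bool:
        # Store the state only if every announced change has been received.
        if self.all_changes_received():
            self._saved = dict(self._state)  # stand-in for non-transitory memory
            return True
        return False
```

In this sketch, activating the affordance while a pushed change is still outstanding simply declines the save, which is one way to guarantee that a stored game state is never internally inconsistent.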
In various implementations, the game system 200 or portions thereof are included in a device (e.g., the electronic device 100) enabled with a data obtainer 210 to obtain a gaming application 212 of the device. The CPU(s) may execute the gaming application 212 and may present the game to a user using a game presenter subsystem 220. For example, for a game of chess, the game presenter subsystem 220 may obtain three-dimensional (3D) models of the chess pieces, the shaders that are used to render the chess pieces, and information regarding where the pieces are positioned in the XR environment 106.
In some implementations, while the gaming application 212 is executing, the gaming application 212 may cause a user interface to be displayed in the XR environment 106, e.g., on a display 222. The user interface may include an affordance, such as the affordance 150 shown in
In some implementations, in response to detecting the user input 232, a state subsystem 240 may determine state information 242 for the gaming application 212. The state information 242 may include a first portion that indicates respective statuses of one or more entities associated with the gaming application that have changed since a previous save. For example, for a board game application, the first portion of the state information 242 may indicate the positions of various pieces on the board, which player has the current turn, etc. As another example, for a role-playing game application, the first portion of the state information 242 may indicate one or more actions that a character in the game is enabled to perform. In some implementations, the first portion of the state information includes one or more resource counters, such as a health counter, an energy counter, and/or an ammunition counter. In some implementations, the state subsystem 240 may select one or more entities for which status information is included or excluded from the state information 242.
The first portion of the state information may be stored to a data structure during execution of the gaming application 212. In some implementations, the first portion of the state information is stored periodically, e.g., at defined time intervals. In some implementations, the first portion of the state information is stored in response to an event, such as an entity moving or a change in a status of an entity.
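Both storage policies mentioned above, periodic and event-driven, can be sketched as below. The names (`on_entity_event`, `on_timer_tick`) are assumptions for illustration only.

```python
class ChangeRecorder:
    """Sketch: accumulate the first portion of the state information during
    execution, either per entity-change event or on a periodic timer tick."""

    def __init__(self):
        # Entities whose statuses have changed since the previous save.
        self.changed_since_save: dict = {}

    def on_entity_event(self, entity_id: str, status: dict) -> None:
        # Event-driven: record an entity as soon as its status changes.
        self.changed_since_save[entity_id] = status

    def on_timer_tick(self, live_entities: dict, last_saved: dict) -> None:
        # Periodic: diff the live state against the previous save at a
        # defined time interval and record only the differences.
        for entity_id, status in live_entities.items():
            if last_saved.get(entity_id) != status:
                self.changed_since_save[entity_id] = status
```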
In some implementations, the state subsystem 240 saves the first portion of the state information that indicates the respective changed statuses of the entity or entities in a non-transitory memory 244 as a representation of the gaming progress of the user. The state subsystem 240 may forgo saving a second portion of the state information. This second portion may include presentation information that is used to present the gaming application 212 using the display 222, e.g., to render the game. Because the presentation information may not change frequently during gameplay, the presentation information may not be stored in the non-transitory memory 244. Forgoing storage of the second portion of the state information may conserve storage resources and improve performance.
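One way to realize the split between a saved first portion and an unsaved second portion is sketched below. The key names (`texture_maps`, `shaders`, `models`) are hypothetical labels for presentation information, chosen only for illustration.

```python
# Assumed split: presentation keys are regenerable from game metadata and
# are therefore excluded from the save; everything else is entity status.
PRESENTATION_KEYS = {"texture_maps", "shaders", "models"}

def partition_state(state: dict) -> tuple:
    """Split state into a first portion (saved) and a second portion
    (presentation information, not saved)."""
    first = {k: v for k, v in state.items() if k not in PRESENTATION_KEYS}
    second = {k: v for k, v in state.items() if k in PRESENTATION_KEYS}
    return first, second

def save_game(state: dict, memory: dict) -> None:
    first, _second = partition_state(state)  # forgo saving the second portion
    memory["save"] = first
```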
Briefly, the method 300 includes obtaining a gaming application of a device. While the gaming application is executing, a user interface that includes a save affordance may be overlaid on the gaming application. In response to detecting a user input directed to the save affordance, state information for the gaming application may be determined. A first portion of the state information may indicate respective statuses of one or more entities associated with the gaming application that have changed since a previous save. The first portion of the state information, which indicates the respective changed statuses of the one or more entities, may be saved in a non-transitory memory as a representation of the gaming progress of the user.
In various implementations, as represented by block 310, the method 300 includes obtaining a gaming application of the device. The gaming application may be obtained, for example, from a datastore via a wired or wireless network connection. In some implementations, the gaming application may be obtained via a physical medium, such as a memory device or an optical disc. The device may execute the gaming application 212 and may present the game to a user, for example, using a display. In some implementations, the device may obtain rendering information for use during runtime in rendering an environment associated with the gaming application. For example, for a game of chess, the device may obtain three-dimensional (3D) models of the chess pieces, the shaders that are used to render the chess pieces, and information regarding initial locations of the pieces in an XR environment.
In various implementations, as represented by block 320, the method 300 includes overlaying on the gaming application a user interface that includes a save affordance while executing the gaming application. For example, the save affordance may appear as a button in the field of view of the user. In some implementations, the save affordance may appear as an option in a menu. The method 300 may include detecting a user input directed to the save affordance. For example, a gaze input may be detected using user-facing cameras. A save operation may be initiated in response to determining that the user's gaze is directed to the save affordance. As another example, a world-facing camera may be used to detect a gesture performed by the user. The user may perform the gesture using an extremity or an auxiliary pointing device, such as a mouse or a stylus. In some implementations, the auxiliary pointing device communicates the user input to the device, e.g., by a wired or wireless connection.
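The input-detection logic described above can be sketched as a simple dispatch: a save is initiated when a gaze, gesture, or pointer input lands within the save affordance's bounds. The event structure and field names below are illustrative assumptions.

```python
def save_requested(event: dict, affordance_bounds: tuple) -> bool:
    """Sketch: return True when a supported input type is directed to the
    save affordance (assumed axis-aligned bounds in display coordinates)."""
    if event.get("kind") not in {"gaze", "gesture", "pointer"}:
        return False  # e.g., ignore keyboard events in this sketch
    x0, y0, x1, y1 = affordance_bounds
    x, y = event["position"]
    return x0 <= x <= x1 and y0 <= y <= y1
```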
In various implementations, as represented by block 330, the method 300 includes determining state information for the gaming application in response to detecting a user input directed to the save affordance. The state information may be accumulated during runtime and may be mirrored to a game state representation during a mirroring phase. In some implementations, the runtime state may be mirrored every frame. In some implementations, the runtime state may be mirrored before a save operation.
In some implementations, in a mirroring phase, objects may be created to represent entities in the game environment. The objects may be moved or deleted as the corresponding entities move.
In some implementations, manual mirroring may be performed. A game designer may select the intervals at which data is mirrored to a game state. For example, the complete state may be mirrored every frame. As another example, the complete state may be mirrored before a save operation. As another example, each object may be updated in the game state as the corresponding object changes in the runtime. For example, every time a piece on a chess board is moved, a corresponding game state object may be updated with new position information.
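The per-object variant of manual mirroring, in which a game state object is updated each time the corresponding runtime object changes, might look like the following sketch. The chess example and all names are illustrative assumptions.

```python
class MirroredPiece:
    """Sketch of manual mirroring: a runtime chess piece updates its
    game-state mirror object every time it moves."""

    def __init__(self, piece_id: str, square: str, game_state: dict):
        self.piece_id = piece_id
        self.square = square
        self._game_state = game_state
        # Create the mirror object in the game state when the entity is created.
        self._game_state[piece_id] = {"square": square}

    def move(self, new_square: str) -> None:
        self.square = new_square
        # Mirror the change into the game state as it happens, so a save
        # operation can proceed without a separate full-state mirroring pass.
        self._game_state[self.piece_id]["square"] = new_square
```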
In some implementations, the user input may be used to specify one or more entities to include or exclude from the state information. For example, if a player leaves a multiplayer gaming application and their status is no longer relevant to the game, the save affordance may include an option to exclude that player's corresponding entity or entities in the game from being included in the state information.
A first portion of the state information may indicate respective statuses of one or more entities associated with the gaming application that have changed since a previous save. In some implementations, as represented by block 330a, the status of an entity may include a position of the entity in an environment associated with the gaming application. For example, in a role-playing game application, the status of a character entity may include a current location of the character in the game world. The location may be represented as coordinates in the game world (e.g., 3D coordinates) or as a descriptive location (e.g., “entrance of dungeon ABC”). In some implementations, as represented by block 330b, the status of an entity may include a position of the entity on a game board. The position may be associated with a position identifier. For example, in a chess application, the position of the entity may be represented using algebraic notation.
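The position encodings mentioned above (3D world coordinates, a descriptive location, or a board position in algebraic notation) can be sketched as follows; the field names are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class PositionStatus:
    """Illustrative encodings of an entity's position status."""
    world_pos: tuple = None    # 3D coordinates in the game world
    board_pos: str = None      # board square, e.g., chess algebraic notation
    descriptive: str = None    # e.g., "entrance of dungeon ABC"

def to_algebraic(file_index: int, rank_index: int) -> str:
    # Map zero-based board indices to chess algebraic notation,
    # e.g., (4, 3) -> "e4".
    return "abcdefgh"[file_index] + str(rank_index + 1)
```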
In some implementations, as represented by block 330c, the status of an entity may include a type of the entity. For example, some board games include pieces of different types; e.g., chess includes pawns, knights, bishops, rooks, queens, and kings. As another example, role-playing games include different types of characters, including player characters and non-player characters, and there are different types of non-player characters, such as different types of enemies.
In some implementations, an entity may be identified by a unique identifier that identifies the entity as differentiated from all other objects in the environment associated with the gaming application. The identifier may be assigned when an entity is created. The entity may have a number of constituent structures known as structs. Each struct may be associated with a struct type. Each struct type may have a fixed size and a number of constituent members. Each member may be associated with a name, a member type, and a fixed offset in a corresponding data section of the non-transitory memory. The data section may be a binary blob that represents the value of each member.
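A struct type with fixed-size members at fixed offsets in a binary data section, as described above, can be sketched with Python's `struct` module. The member names, formats, and offsets below are hypothetical.

```python
import struct

# Hypothetical struct type: each member has a name, a member type (format
# character), and a fixed offset inside the entity's data section.
MEMBERS = [
    ("entity_id", "I", 0),   # unsigned 32-bit unique identifier at offset 0
    ("x",         "f", 4),   # float position at offset 4
    ("y",         "f", 8),   # float position at offset 8
    ("health",    "H", 12),  # unsigned 16-bit resource counter at offset 12
]
BLOB_SIZE = 14               # fixed size of the struct type

def pack_entity(values: dict) -> bytes:
    """Write each member into the binary blob at its fixed offset."""
    blob = bytearray(BLOB_SIZE)
    for name, fmt, offset in MEMBERS:
        struct.pack_into("<" + fmt, blob, offset, values[name])
    return bytes(blob)

def unpack_entity(blob: bytes) -> dict:
    """Recover the member values from the binary blob."""
    return {name: struct.unpack_from("<" + fmt, blob, offset)[0]
            for name, fmt, offset in MEMBERS}
```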
In some implementations, as represented by block 330d, the status of an entity may include an action that is performable by the entity. For example, different types of pieces in the game of chess can move in different ways. As another example, different types of characters in a role-playing game may be capable of different actions. For example, while a monster character may be able to attack, a villager character may lack this ability.
In some implementations, as represented by block 330e, the status of an entity may include a resource counter. Some examples of resource counters may include health (e.g., hit point) and energy (e.g., stamina or mana) counters, ammunition counters, wealth counters, and experience counters.
In some implementations, as represented by block 330f, the status of an entity may include a turn indicator. The turn indicator may indicate which player has a current turn in the game. For example, the status may be indicated as a binary value of 0 (not the player's turn) or 1 (the player's turn) or as a text value (e.g., “white”).
In various implementations, as represented by block 340, the method 300 includes saving, in a non-transitory memory, the first portion of the state information that indicates the respective changed statuses of the one or more entities as a representation of the gaming progress of the user. Saving the state information may involve serializing the state information to a binary buffer, e.g., writing the data for each object to the buffer. In some implementations, e.g., multiplayer implementations, runtime changes to the state information may be serialized and sent over a network to keep multiple clients in synchronization with each other.
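Serializing the first portion of the state information to a binary buffer, object by object, can be sketched as below. The length-prefixed record format is an assumption; the same bytes could be written to non-transitory memory or sent over a network to synchronize clients.

```python
import io
import json
import struct

def serialize_state(first_portion: dict) -> bytes:
    """Sketch: write the data for each object into one binary buffer,
    length-prefixed so a peer (or a later load) can parse it back."""
    buf = io.BytesIO()
    for entity_id, status in first_portion.items():
        record = json.dumps({"id": entity_id, "status": status}).encode()
        buf.write(struct.pack("<I", len(record)))  # 4-byte length prefix
        buf.write(record)
    return buf.getvalue()

def deserialize_state(data: bytes) -> dict:
    """Parse the buffer back into per-entity status records."""
    out, offset = {}, 0
    while offset < len(data):
        (length,) = struct.unpack_from("<I", data, offset)
        offset += 4
        record = json.loads(data[offset:offset + length])
        out[record["id"]] = record["status"]
        offset += length
    return out
```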
In some implementations, as represented by block 340a, the device may forgo saving a second portion of the state information. As represented by block 340b, this second portion may include presentation information that is used to present the gaming application using a display, e.g., to render the game. Because the presentation information may not change frequently during gameplay, the presentation information may not be stored in the non-transitory memory. Forgoing storage of the second portion of the state information may conserve storage resources and improve performance.
In some implementations, the communication interface 408 is provided to, among other uses, establish and maintain a metadata tunnel between a cloud hosted network management system and at least one private network including one or more compliant devices. In some implementations, the one or more communication buses 405 include circuitry that interconnects and controls communications between system components. The memory 404 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. The memory 404 optionally includes one or more storage devices remotely located from the one or more CPUs 402. The memory 404 comprises a non-transitory computer readable storage medium.
In some implementations, the memory 404 or the non-transitory computer readable storage medium of the memory 404 stores the following programs, modules and data structures, or a subset thereof including an optional operating system 430, the data obtainer 210, the game presenter subsystem 220, the input obtainer 230, and the state subsystem 240. In various implementations, the device 400 performs the method 300 shown in
In some implementations, the data obtainer 210 includes instructions 210a and heuristics and metadata 210b for obtaining a gaming application of the device. In some implementations, the game presenter subsystem 220 presents a game to a user, for example, using a display and/or an audio output device. The game presenter subsystem 220 may obtain information involved in rendering entities in the game. For example, for a game of chess, the game presenter subsystem 220 may obtain three-dimensional (3D) models of the chess pieces, the shaders that are used to render the chess pieces, and information regarding where the pieces are positioned in the XR environment 106. To that end, the game presenter subsystem 220 includes instructions 220a and heuristics and metadata 220b.
In some implementations, the input obtainer 230 obtains a user input, such as a gaze input or a gesture input, directed to the save affordance while the gaming application 212 is executing. To that end, the input obtainer 230 includes instructions 230a and heuristics and metadata 230b.
In some implementations, the state subsystem 240 determines state information for the gaming application. To that end, the state subsystem 240 includes instructions 240a and heuristics and metadata 240b.
In some implementations, the one or more I/O devices 406 include a user-facing image sensor (e.g., a front-facing camera) and/or a scene-facing image sensor (e.g., a rear-facing camera). In some implementations, the one or more I/O devices 406 include one or more head position sensors that sense the position and/or motion of the head of the user. In some implementations, the one or more I/O devices 406 include a display for displaying the graphical environment (e.g., for displaying the XR environment 106 shown in
In various implementations, the one or more I/O devices 406 include a video pass-through display which displays at least a portion of a physical environment surrounding the device 400 as an image captured by a scene camera. In various implementations, the one or more I/O devices 406 include an optical see-through display which is at least partially transparent and passes light emitted by or reflected off the physical environment.
It will be appreciated that
While various aspects of implementations within the scope of the appended claims are described above, it should be apparent that the various features of implementations described above may be embodied in a wide variety of forms and that any specific structure and/or function described above is merely illustrative. Based on the present disclosure one skilled in the art should appreciate that an aspect described herein may be implemented independently of any other aspects and that two or more of these aspects may be combined in various ways. For example, an apparatus may be implemented and/or a method may be practiced using any number of the aspects set forth herein. In addition, such an apparatus may be implemented and/or such a method may be practiced using other structure and/or functionality in addition to or other than one or more of the aspects set forth herein.
This application claims priority to U.S. Provisional Patent App. No. 63/347,408, filed on May 31, 2022, the disclosure of which is hereby incorporated by reference in its entirety.