The present disclosure relates generally to augmented reality (AR) scenes, and more particularly to methods and systems for augmenting voice output of virtual objects in AR scenes based on an acoustics profile of a real-world space.
Augmented reality (AR) technology has seen unprecedented growth over the years and is expected to continue growing rapidly. AR technology provides an interactive three-dimensional (3D) experience that combines a view of the real world with computer-generated elements (e.g., virtual objects) in real-time. In AR simulations, the real world is infused with virtual objects, providing an interactive experience. With the rise in popularity of AR technology, various industries have implemented AR technology to enhance the user experience, including, for example, the video game industry, entertainment, and social media.
For example, a growing trend in the video game industry is to improve the gaming experience of users by enhancing the audio in video games so that the gaming experience can be elevated in several ways, such as providing situational awareness, creating a three-dimensional audio perception experience, creating a visceral emotional response, intensifying gameplay actions, etc. Unfortunately, some AR users may find that current AR technology used in gaming is limited and may not provide AR users with an immersive AR experience when interacting with virtual characters and virtual objects in the AR environment. Consequently, an AR user may be missing an entire dimension of an engaging gaming experience.
It is in this context that implementations of the disclosure arise.
Implementations of the present disclosure include methods, systems, and devices relating to augmenting voice output of a virtual object in an augmented reality (AR) scene. In some embodiments, methods are disclosed that enable augmenting the voice output of virtual objects (e.g., virtual characters) in an AR scene where the voice output is augmented based on the acoustics profile of a real-world space. For example, a user may be physically located in their living room and wearing AR goggles (e.g., AR head mounted display) to interact in an AR environment. While immersed in the AR environment that includes both real-world objects and virtual objects, the virtual objects (e.g., virtual characters, virtual pet, virtual furniture, virtual toys, etc.) may generate voice outputs and sound outputs while interacting in the AR scene. To enhance the sound output of the virtual objects so that it sounds more realistic to the AR user, the system may be configured to process the sound output based on the acoustics profile of the living room.
In one embodiment, the system is configured to identify an acoustics profile associated with the real-world space of the AR user. Since the real-world space of the AR user may be different each time the AR user initiates a session to engage with an AR scene, the acoustics profile may include different acoustic characteristics depending on the location of the real-world space and the real-world objects that are present. Accordingly, the methods disclosed herein outline ways of augmenting the sound output of virtual objects based on the acoustics profile of the real-world space. In this way, the sound output of the virtual objects may sound more realistic to the AR user in his or her real-world space, as if the virtual objects are physically present in the same real-world space.
In some embodiments, the augmented sound output of the virtual objects can be audible via a device of the AR user (e.g., headphones or earbuds), via a local speaker in the real-world space, or via a surround sound system (e.g., 5.1-channel surround sound configuration, 7.1-channel surround sound configuration, etc.) that is present in the real-world space. In other embodiments, specific sound sources that are audible by the AR user can be selectively removed based on the preferences of the AR user. For instance, if children are located in the real-world living room of the AR user, sound produced by the children can be removed and rendered inaudible to the AR user. In other embodiments, sound components produced by specific virtual objects (e.g., a barking virtual dog) can be removed so that they are inaudible to the AR user. In one embodiment, sound components originating from specific regions in the real-world space can be removed so that they are inaudible to the AR user. In this way, specific sound components can be selectively removed based on the preferences of the AR user to provide the AR user with a customized AR experience and to allow the AR user to be fully immersed in the AR environment.
In one embodiment, a method for augmenting voice output of a virtual character in an augmented reality (AR) scene is provided. The method includes examining, by a server, the AR scene, said AR scene includes a real-world space and the virtual character overlaid into the real-world space at a location, the real-world space includes a plurality of real-world objects present in the real-world space. The method includes processing, by the server, to identify an acoustics profile associated with the real-world space, said acoustics profile including reflective sound and absorbed sound associated with real-world objects proximate to the location of the virtual character. The method includes processing, by the server, the voice output by the virtual character while interacting in the AR scene; the processing is configured to augment the voice output based on the acoustics profile of the real-world space, the augmented voice output being audible by an AR user viewing the virtual character in the real-world space. In this way, when the voice output of the virtual character is augmented, the augmented voice output may sound more realistic to the AR user as if the virtual character is physically present in the same real-world space as the AR user.
In another embodiment, a system for augmenting sound output of a virtual object in an augmented reality (AR) scene is provided. The system includes an AR head mounted display (HMD), said AR HMD includes a display for rendering the AR scene. In one embodiment, said AR scene includes a real-world space and the virtual object overlaid into the real-world space at a location, the real-world space includes a plurality of real-world objects present in the real-world space. The system includes a processing unit associated with the AR HMD for identifying an acoustics profile associated with the real-world space, said acoustics profile including reflective sound and absorbed sound associated with real-world objects proximate to the location of the virtual object. In one embodiment, the processing unit is configured to process the sound output by the virtual object while interacting in the AR scene, said processing unit is configured to augment the sound output based on the acoustics profile of the real-world space; the augmented sound output being audible by an AR user viewing the virtual object in the real-world space.
Other aspects and advantages of the disclosure will become apparent from the following detailed description, taken in conjunction with the accompanying drawings, illustrating by way of example the principles of the disclosure.
The disclosure may be better understood by reference to the following description taken in conjunction with the accompanying drawings in which:
The following implementations of the present disclosure provide methods, systems, and devices for augmenting voice output of a virtual character in an augmented reality (AR) scene for an AR user interacting in an AR environment. In one embodiment, the voice output by the virtual character can be augmented based on an acoustics profile of the real-world space where the AR user is present. In some embodiments, the acoustics profile of the real-world space may vary and have acoustic characteristics (e.g., reflective sound, absorbed sound, etc.) that are based on the location of the real-world space and the real-world objects that are present in the real-world space. Accordingly, the system is configured to identify the acoustics profile of the real-world space where a given AR user is physically located and to augment the voice output of the virtual characters based on the identified acoustics profile.
For example, an AR user may be interacting with an AR scene that includes the AR user physically located in a real-world living room while watching a sporting event on television. While watching the sporting event, virtual characters can be rendered in the AR scene so that the AR user and virtual characters can watch the event together. As the virtual characters and the AR user converse with one another, the system is configured to identify an acoustics profile of the living room and to augment the voice output of the virtual characters, which can be audible to the AR user in substantial real-time. Accordingly, as the voice output of the virtual characters is augmented and delivered to the AR user, this enables an enhanced and improved AR experience since the augmented voice output of the virtual characters may sound more realistic, as if the virtual characters are physically present in the same real-world space as the AR user. This allows the AR user to have a more engaging and intimate AR experience with friends who may appear in the real-world space as virtual characters even though they may be physically located hundreds of miles away. In turn, this can enhance the AR experience for AR users who desire to have realistic social interactions with virtual objects and virtual characters.
By way of example, in one embodiment, a method is disclosed that enables augmenting voice output of a virtual character in an AR scene. The method includes examining, by a server, the AR scene, the AR scene includes a real-world space and the virtual character overlaid into the real-world space at a location. In one example, the real-world space includes a plurality of real-world objects present in the real-world space. In one embodiment, the method may further include processing, by the server, to identify an acoustics profile associated with the real-world space. In one example, the acoustics profile includes reflective sound and absorbed sound associated with real-world objects proximate to the location of the virtual character. In another embodiment, the method may include processing, by the server, the voice output by the virtual character while interacting in the AR scene. In one example, the processing of the voice output is configured to augment the voice output based on the acoustics profile of the real-world space. The augmented voice output can be audible by an AR user viewing the virtual character in the real-world space.
In accordance with one embodiment, a system is disclosed for augmenting sound output (e.g., voice output) of virtual objects (e.g., virtual characters) that are present in an AR scene. For example, a user may be using an AR head mounted display (e.g., AR goggles, AR glasses, etc.) to interact in an AR environment which includes various AR scenes generated by a cloud computing and gaming system. While the user views and interacts with the AR scenes through the display of the AR HMD, the system is configured to analyze the field of view (FOV) into the AR scene and to examine the real-world space to identify real-world objects that may be present in the real-world space. In one embodiment, the system is configured to identify an acoustics profile associated with the real-world space which may include reflective sound and absorbed sound associated with the real-world objects. In some embodiments, if the AR scene includes virtual characters that produce voice output, the system is configured to augment the voice output based on the acoustics profile of the real-world space. In this way, the augmented voice output may sound more realistic and provide the AR user with an enhanced and improved AR experience.
With the above overview in mind, the following provides several example figures to facilitate understanding of the example embodiments.
As illustrated in
In some embodiments, the AR HMD 102 may include an externally facing camera that is configured to capture images of the real-world space 105 of the user 100 such as real-world objects 110 that may be located in the real-world space 105 of the user. In some embodiments, the images captured by the externally facing camera can be analyzed to determine the location/orientation of the real-world objects 110 relative to the AR HMD 102. Using the known location/orientation of the AR HMD 102, the real-world objects, and inertial sensor data from the AR HMD, the physical actions and movements of the user can be continuously monitored and tracked during the user’s interaction. In some embodiments, the externally facing camera can be an RGB-Depth sensing camera or a three-dimensional (3D) camera which includes depth sensing and texture sensing so that 3D models can be created. The RGB-Depth sensing camera can provide both color and dense depth images which can facilitate 3D mapping of the captured images. For example, the externally facing camera is configured to analyze the depth and texture of a real-world object such as a coffee table that may be present in the real-world space of the user. Using the depth and texture data of the coffee table, the material and acoustic properties of the coffee table can be further determined. In other embodiments, the externally facing camera is configured to analyze the depth and texture of other real-world objects such as the walls, floors, carpet, etc. and their respective acoustic properties.
In some embodiments, the AR HMD 102 may provide a user with a field of view (FOV) 118 into the AR scene 104. Accordingly, as the user 100 turns their head and looks toward different regions within the real-world space 105, the AR scene is updated to include any additional virtual objects 106 and real-world objects 110 that may be within the FOV 118 of the user 100. In one embodiment, the AR HMD 102 may include a gaze tracking camera that is configured to capture images of the eyes of the user 100 to determine the gaze direction of the user 100 and the specific virtual objects 106 or real-world objects 110 that the user 100 is focused on. Accordingly, based on the FOV 118 and the gaze direction of the user 100, the system may detect specific objects that the user may be focused on, e.g., virtual objects, furniture, television, floors, walls, etc.
In the illustrated implementation, the AR HMD 102 is wirelessly connected to a cloud computing and gaming system 116 over a network 114. In one embodiment, the cloud computing and gaming system 116 maintains and executes the AR scenes and video game played by the user 100. In some embodiments, the cloud computing and gaming system 116 is configured to receive inputs from the AR HMD 102 over the network 114. The cloud computing and gaming system 116 is configured to process the inputs to affect the state of the AR scenes of the AR environment. The output from the executing AR scenes, such as virtual objects, real-world objects, video data, audio data, and user interaction data, is transmitted to the AR HMD 102. In other implementations, the AR HMD 102 may communicate with the cloud computing and gaming system 116 wirelessly through alternative mechanisms or channels such as a cellular network.
In the illustrated example shown in
In one embodiment, the system is configured to identify an acoustics profile associated with the real-world space 105. In some embodiments, the acoustics profile may include reflective sound and absorbed sound associated with the real-world objects. For example, when a sound output is generated via a real-world object (e.g., audio from a television) or a virtual object (e.g., barking from a virtual dog) in the real-world space 105, the sound output may cause reflected sound to bounce off the real-world objects 110 (e.g., walls, floor, ceiling, furniture, etc.) that are present in the real-world space 105 before it reaches the ears of the AR user 100. In other embodiments, when a sound output is generated in the real-world space 105, acoustic absorption may occur, in which a real-world object takes in the sound energy as absorbed sound rather than reflecting it as reflective sound. In one embodiment, reflective sound and absorbed sound can be determined based on the absorption coefficients of the real-world objects 110. In general, soft, pliable, or porous materials (like cloth) may absorb more sound compared to dense, hard, impenetrable materials (such as metals). In some embodiments, the reflective sound and absorbed sound of the real-world objects include a corresponding magnitude that is based on the location of the sound output in the real-world space and its sound intensity. In other embodiments, the reflective sound and absorbed sound are associated with real-world objects proximate to the location of the virtual object or real-world object that projects the sound output.
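By way of illustration only, the split between reflected and absorbed sound described above can be sketched using a material's sound absorption coefficient. The coefficient values and function below are invented for illustration and are not part of the disclosed system.

```python
# Illustrative sketch: splitting incident sound energy into reflected and
# absorbed components using a material's sound absorption coefficient.
# The coefficients below are invented example values, not measured data.
ABSORPTION_COEFFICIENTS = {
    "concrete_wall": 0.02,   # hard, dense: reflects most sound
    "carpet": 0.55,          # soft, porous: absorbs much more
    "cloth_sofa": 0.70,
    "metal_cabinet": 0.05,
}

def split_sound_energy(incident_energy: float, material: str) -> tuple[float, float]:
    """Return (reflected_energy, absorbed_energy) for sound hitting a surface."""
    alpha = ABSORPTION_COEFFICIENTS[material]
    absorbed = incident_energy * alpha
    reflected = incident_energy - absorbed
    return reflected, absorbed

# A carpet with alpha = 0.55 absorbs 55% of the energy and reflects the rest.
reflected, absorbed = split_sound_energy(1.0, "carpet")
```

Consistent with the disclosure, a hard surface such as a wall would return mostly reflected sound, while cloth returns mostly absorbed sound.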
As further illustrated in
Throughout the progression of the user’s interaction in the AR environment, the system can automatically detect the voice and sound outputs produced by the corresponding virtual objects and can determine its three-dimensional (3D) location in the AR scene. In one embodiment, using the identified acoustics profile of the real-world space 105, the system is configured to augment the sound and voice output based on the acoustics profile. As a result, when the augmented sound and voice outputs (e.g., 108a′-108n′) are perceived by the user, it may sound more realistic to the AR user 100 since the sound and voice outputs are augmented based on the acoustic characteristics of the real-world space 105.
As illustrated in the example shown in
In one example, when a real-world object 110a (e.g., television) produces a sound output (e.g., TV audio output 206), the sound output may cause reflected sound 202a-202n to bounce off the real-world objects 110 (e.g., walls, floor, ceiling, furniture, etc.) that are present in the real-world space 105 before it reaches the ears of the AR user 100. In one embodiment, the reflected sound 202a-202n may have a corresponding magnitude and direction that corresponds to a sound intensity level of the sound output (e.g., TV audio output 206) produced in the real-world space. As shown in
In one embodiment, the system is configured to examine the size and shape of the real-world objects 110 and their corresponding sound absorption coefficients to identify the acoustics profile of the real-world space 105. For example, the reflected sound 202a associated with the walls may have a greater magnitude than the reflected sound 202b associated with the bookshelf 110d since the walls have a greater surface area and a smaller sound absorption coefficient relative to the bookshelf 110d. Accordingly, the size, shape, and acoustic properties of the real-world objects can affect the acoustics profile of a real-world space 105 and in turn be used to augment the voice output of the virtual character in the real-world space.
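The combined effect of surface area and absorption coefficient on a room's acoustics can be illustrated with Sabine's classical reverberation-time formula, RT60 = 0.161·V / Σ(Si·αi). The room dimensions and coefficients below are invented illustrative values; the disclosure does not mandate this particular formula.

```python
# Hedged sketch using Sabine's formula: larger, harder surfaces (low alpha)
# lengthen the reverberation time; soft surfaces shorten it.
def rt60_sabine(volume_m3: float, surfaces: list[tuple[float, float]]) -> float:
    """RT60 = 0.161 * V / sum(S_i * alpha_i), with V in m^3 and S_i in m^2."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A hypothetical 5 m x 4 m x 2.5 m living room (volume = 50 m^3).
surfaces = [
    (45.0, 0.02),  # painted walls: large area, low absorption
    (20.0, 0.55),  # carpeted floor
    (20.0, 0.10),  # ceiling
]
rt60 = rt60_sabine(50.0, surfaces)  # roughly 0.58 seconds for these values
```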
In some embodiments, a calibration process can be performed using acoustic sensors 112 to determine the acoustics profile of the real-world space 105. As shown in
In other embodiments, a calibration process can be performed using the AR HMD 102 to determine the acoustics profile of the real-world space 105. In one example, the user 100 may be instructed to move around the real-world space 105 to test and measure the acoustic characteristics at different positions in the real-world space. In one embodiment, the user is instructed to stand at a specific position in the real-world room and is prompted to verbally express a phrase (e.g., hello, how are you?). When the user 100 verbally expresses the phrase, microphones of the AR HMD 102 are configured to process the verbal phrase and to measure the acoustic characteristics of the area where the user 100 is located. In some embodiments, the microphones of the AR HMD 102 are configured to take a variety of acoustic measurements such as frequency response, sound reflection levels, sound absorption levels, how long it takes for frequency energy to decay in the room, magnitude and direction of the reflected sound, magnitude and direction of the absorbed sound, reverberations, echoes, etc. Based on the acoustic measurements, the acoustics profile can be determined for the real-world space 105.
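One of the calibration measurements mentioned above, how long it takes for frequency energy to decay in the room, can be sketched as follows. The function name and the synthetic energy envelope are assumptions for illustration, not part of the disclosed calibration process.

```python
# Illustrative sketch: find how long recorded energy takes to fall 60 dB
# below its peak, given an impulse-response energy envelope.
import math

def decay_time_60db(envelope: list[float], sample_rate: float) -> float:
    """Return seconds until the energy envelope falls 60 dB below its peak."""
    peak = max(envelope)
    peak_index = envelope.index(peak)
    threshold = peak * 10 ** (-60 / 10)   # -60 dB expressed as an energy ratio
    for i, energy in enumerate(envelope):
        if i > peak_index and energy < threshold:
            return i / sample_rate
    return len(envelope) / sample_rate    # decay not reached within the recording

# Synthetic exponential energy decay sampled at 1 kHz (invented test signal).
envelope = [math.exp(-0.05 * n) for n in range(400)]
t60 = decay_time_60db(envelope, sample_rate=1000.0)
```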
In other embodiments, the system is configured to process any sound or sound effect based on the acoustics profile of the real-world space 105 of the user 100. In this way, when the sound is augmented, the augmented sound may sound more realistic to the AR user as if the augmented sound is present in the same real-world space as the AR user.
In one embodiment, the system includes an operation 302 that is configured to identify an acoustics profile of a real-world space 105. In some embodiments, the operation may include a calibration process where acoustic sensors 112 are placed at various locations within the real-world space and configured to measure acoustic characteristics within their surrounding areas. In one embodiment, operation 302 is configured to take a variety of acoustic measurements such as frequency response, sound reflection levels, sound absorption levels, how long it takes for frequency energy to decay in the room, magnitude and direction of the reflected sound, magnitude and direction of the absorbed sound, reverberations, echoes, etc. Using the acoustic measurements, the acoustics profile of the real-world space 105 can be identified and used to augment the respective sound outputs 108a-108n of the virtual characters. As noted above, the calibration process can also be performed using the AR HMD 102 to determine the acoustics profile of the real-world space 105. In one example, the user 100 may be instructed to move around the real-world space 105 to test and measure the acoustic characteristics at various locations in the real-world space. When the user is prompted to speak or to generate a sound output, the microphones of the AR HMD 102 are configured to capture the acoustic measurements which can be used to generate the acoustics profile of the real-world space 105.
As further illustrated in
In some embodiments, the sound output augment processor 304 is configured to process the acoustics profile of the real-world space 105. Using the position coordinates of the virtual objects 106a-106n and their respective sound outputs 108a-108n, the sound output augment processor 304 is configured to augment the sound outputs 108a-108n based on the acoustics profile and the position of the virtual objects 106a-106n to generate augmented sound outputs 108a′-108n′ which can be audible by the AR user 100. For example, the acoustics profile of the real-world space 105 includes acoustic characteristics such as reflective sound 202 and absorbed sound 204 associated with the real-world objects 110a-110n in the room, e.g., walls, floors, ceiling, sofa, cabinet, bookshelf, television, etc. When the sound outputs 108a-108n are augmented to produce the augmented sound outputs 108a′-108n′, the augmented sound outputs 108a′-108n′ may appear more realistic to the user since the sound output augment processor 304 takes into consideration the acoustic properties of the real-world objects and the location in the room where the sound output was projected by the virtual object.
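As a simplified, non-limiting sketch of this augmentation step, a dry voice signal can be convolved with a short room impulse response so that the output carries delayed, scaled reflections of the kind described above. The toy signals and impulse response below are invented.

```python
# Hedged sketch: convolving a virtual character's dry voice signal with a
# room impulse response derived from the acoustics profile. Each impulse-
# response tap acts as a delayed, scaled reflection off a real-world object.
def convolve(dry_signal: list[float], impulse_response: list[float]) -> list[float]:
    """Direct-form convolution of a signal with a room impulse response."""
    out = [0.0] * (len(dry_signal) + len(impulse_response) - 1)
    for i, s in enumerate(dry_signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

# Toy impulse response: direct path plus one weaker, delayed wall reflection.
ir = [1.0, 0.0, 0.0, 0.4]
augmented = convolve([1.0, 0.5], ir)
# The output contains the original samples followed by the echoed copies.
```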
In some embodiments, operation 306 is configured to transmit the augmented sound output 108a′-108n′ to the AR user 100 during the user’s interaction with the AR scene 104 which can be audible via an AR HMD 102. In other embodiments, the augmented sound output 108a′-108n′ can be transmitted to a surround sound system (e.g., 5.1-channel surround sound configuration, 7.1-channel surround sound configuration, etc.) in the real-world space 105. In some embodiments, when the augmented sound output 108a′-108n′ is delivered through the surround sound system, the surround sound system may provide a spatial relationship of the sound output produced by the virtual objects. For example, if a virtual character (e.g., 106a) is sitting in the corner of the real-world space 105 and is surrounded by windows, the augmented sound output may be perceived by the AR user 100 as sound being projected from the corner of the real-world space and the sound may appear as if it is reflected off of the windows. Accordingly, the augmented sound output 108a′-108n′ of the virtual objects may take into consideration the spatial relationship of the position of the virtual object relative to the AR user 100.
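The spatial relationship between a virtual object and the AR user can be sketched, in a highly simplified two-channel form, as distance attenuation plus left/right panning by azimuth. The function and the 2D coordinates below are assumptions for illustration; a real spatializer would use head-related transfer functions or a surround-channel layout.

```python
# Illustrative sketch: attenuate a sample by inverse distance and pan it
# between left/right channels based on the source's angle off the listener's
# forward (+y) axis. Coordinates are in meters on a flat 2D plane.
import math

def spatialize(sample: float, src: tuple[float, float],
               listener: tuple[float, float]) -> tuple[float, float]:
    """Return (left, right) samples with distance attenuation and azimuth pan."""
    dx, dy = src[0] - listener[0], src[1] - listener[1]
    distance = max(math.hypot(dx, dy), 1.0)  # clamp to avoid huge gains up close
    gain = 1.0 / distance                    # simple inverse-distance attenuation
    azimuth = math.atan2(dx, dy)             # angle off the forward axis
    pan = (math.sin(azimuth) + 1.0) / 2.0    # 0.0 = full left, 1.0 = full right
    return sample * gain * (1.0 - pan), sample * gain * pan

# A source 2 m directly to the listener's right lands mostly in the right ear.
left, right = spatialize(1.0, src=(2.0, 0.0), listener=(0.0, 0.0))
```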
In some embodiments, operation 306 is configured to segment out specific types of sound sources from the augmented sound outputs 108a′-108n′. In one embodiment, operation 306 may remove various types of sounds, such as reflected sound and absorbed sound, from the augmented sound outputs 108a′-108n′. In one embodiment, the segmentation enables the isolation of frequencies associated with the sound output and enables certain sounds to be selectively removed from or added to the augmented sound outputs 108a′-108n′. In one example, operation 306 is configured to remove specific sounds from the augmented sound outputs 108a′-108n′ so that they are inaudible to the user. For instance, if a television is located in the real-world living room of the AR user, sound produced by the television can be removed from the augmented sound outputs so that it is inaudible to the AR user.
In other embodiments, the augmented sound outputs can be modified to remove specific sound components (e.g., virtual dog barking, children screaming, roommates talking, etc.) so that the selected sounds are inaudible to the AR user. In one embodiment, additional sounds can be added to the augmented sound outputs 108a′-108n′ to provide the user 100 with a customized AR experience. For example, if a virtual dog (e.g., 106n) barks, additional barking sounds can be added to the augmented sound output 108n′ to make it appear as if a pack of dogs are present in the real-world space. In other embodiments, sound components from specific regions in the real-world space can be removed from the augmented sound outputs 108a′-108n′ so that they are inaudible to the AR user. In this way, specific sound components can be selectively removed to modify the augmented sound output and to provide the AR user with a customized experience. In other embodiments, operation 306 is configured to further customize the augmented sound outputs 108a′-108n′ by changing the tone, sound intensity, pitch, volume, and other characteristics based on the context of the AR environment. For example, if the virtual characters are watching a boxing fight and the boxer that they are cheering for is on the verge of winning the fight, the augmented sound output of the virtual characters may be adjusted to increase the sound intensity and volume so that it corresponds with what is occurring in the boxing fight. In another embodiment, operation 306 is configured to further customize the augmented sound outputs 108a′-108n′ by replacing the augmented sound outputs with an alternate sound based on the preferences of the AR user. For example, if a virtual dog (e.g., 106n) barks, the barking sound can be translated or replaced with an alternate sound such as a cat meowing, a human speaking, etc.
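By way of illustration only, the selective removal described above can be sketched as mixing per-source sample streams while muting any source type the AR user has excluded. The source labels, signal values, and function below are invented for illustration.

```python
# Illustrative sketch: sum per-source sample streams, skipping any source
# type the AR user's preferences have marked as muted.
def mix_sources(sources: dict[str, list[float]], muted: set[str]) -> list[float]:
    """Mix the unmuted source streams sample-by-sample into one output."""
    length = max(len(samples) for samples in sources.values())
    mixed = [0.0] * length
    for label, samples in sources.items():
        if label in muted:
            continue  # selected sound component stays inaudible to the AR user
        for i, value in enumerate(samples):
            mixed[i] += value
    return mixed

sources = {
    "virtual_character": [0.2, 0.4, 0.2],
    "virtual_dog_bark": [0.5, 0.5, 0.5],
    "television": [0.1, 0.1, 0.1],
}
# Mute the barking virtual dog and the television per user preference.
mixed = mix_sources(sources, muted={"virtual_dog_bark", "television"})
```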
In another example, if the virtual object 106a speaks, the augmented sound output can be modified so that it sounds like the AR user’s favorite game character.
In one embodiment, the acoustics profile model 402 is configured to receive as input the contextual data 404 to predict an acoustics profile 406 associated with the real-world space 105. In some embodiments, inputs that are not direct inputs may also be provided to the acoustics profile model 402. In one embodiment, the acoustics profile model 402 may also use a machine learning model that is used to identify the real-world objects 110 that are present in the real-world space 105 and the properties associated with the real-world objects 110. For example, the machine learning model can be used to identify that the AR user is sitting on a chair made of rubber and that its corresponding sound absorption coefficient is 0.05. Accordingly, the acoustics profile model 402 can be used to generate a prediction for the acoustics profile 406 of the real-world space which may include reflective sound and absorbed sound associated with the real-world objects. In some embodiments, the acoustics profile model 402 is configured to receive as inputs the acoustic measurements collected from the acoustic sensors 112 and the measurements collected from the microphones of the AR HMD 102. Using the noted inputs, the acoustics profile model 402 may also be used to identify patterns, similarities, and relationships between the inputs to generate a prediction for the acoustics profile 406. Over time, the acoustics profile model 402 can be further refined, and the model can be trained to learn and accurately predict the acoustics profile 406 of a real-world space.
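As a trivial stand-in for the prediction step performed by the acoustics profile model 402, the following sketch maps coarse contextual features of the room to a profile template. A trained machine learning model would replace this lookup; the feature names, template keys, and values are invented.

```python
# Hypothetical sketch: map coarse room features from the contextual data to
# one of a few invented acoustics-profile templates. This lookup only stands
# in for a trained model; none of these values come from the disclosure.
PROFILE_TEMPLATES = {
    "small_carpeted_room": {"rt60_s": 0.3, "reflectivity": 0.2},
    "large_hard_room": {"rt60_s": 0.9, "reflectivity": 0.7},
}

def predict_acoustics_profile(contextual_data: dict) -> dict:
    """Pick an acoustics-profile template from coarse room features."""
    large = contextual_data.get("floor_area_m2", 0) > 30
    soft = contextual_data.get("has_carpet", False)
    key = "small_carpeted_room" if (soft and not large) else "large_hard_room"
    return PROFILE_TEMPLATES[key]

# A small carpeted living room maps to the more absorptive template.
profile = predict_acoustics_profile({"floor_area_m2": 18, "has_carpet": True})
```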
After generating a prediction for the acoustics profile 406 of the real-world space 105, the method flows to the cloud computing and gaming system 116 where the cloud computing and gaming system 116 is configured to process the acoustics profile 406. In one embodiment, the cloud computing and gaming system 116 may include a sound output augment processor 304 that is configured to identify the sound output 108 of the virtual objects in the AR scene 104. In some embodiments, using the acoustics profile 406 of the real-world space 105, the sound output augment processor 304 is configured to augment the sound output 108 based on the acoustics profile 406 in substantial real-time to produce the augmented sound output 108′ for transmission to the AR scene. Accordingly, the augmented sound output 108′ can be audible to the AR user 100 while the user is immersed in the AR environment and interacting with the virtual objects.
In some embodiments, the cloud computing and gaming system 116 can access a data storage 408 to retrieve data that can be used by the sound output augment processor 304 to augment the sound output 108. In one embodiment, the data storage 408 may include information related to the acoustic properties of the real-world objects such as the sound absorption coefficient of various materials. For example, using the sound absorption coefficient, the predicted acoustics profile 406 which includes a prediction of reflective sound and absorbed sound associated with real-world objects can be further adjusted to be more accurate. In other embodiments, the data storage 408 may include templates corresponding to the type of changes to be adjusted to the sound output 108 based on the acoustics profile 406 and the contextual data of the AR scene, e.g., intensity, pitch, volume, tone, etc.
In some embodiments, the data storage 408 may include a user profile of the user which can include preferences, interests, disinterests, etc. of the user. For example, the user profile may indicate that, when the user is immersed in the AR environment, the user likes to be fully disconnected from sounds originating from the real world. Accordingly, using the user profile, the sound output augment processor 304 can generate an augmented sound output 108′ that excludes sound coming from friends, family, dogs, street traffic, and other sounds that may be present in the real-world space.
In one example, as shown in AR scene 104 illustrated in
Memory 604 stores applications and data for use by the CPU 602. Storage 606 provides non-volatile storage and other computer readable media for applications and data and may include fixed disk drives, removable disk drives, flash memory devices, and CD-ROM, DVD-ROM, Blu-ray, HD-DVD, or other optical storage devices, as well as signal transmission and storage media. User input devices 608 communicate user inputs from one or more users to device 600, examples of which may include keyboards, mice, joysticks, touch pads, touch screens, still or video recorders/cameras, tracking devices for recognizing gestures, and/or microphones. Network interface 614 allows device 600 to communicate with other computer systems via an electronic communications network, and may include wired or wireless communication over local area networks and wide area networks such as the internet. An audio processor 612 is adapted to generate analog or digital audio output from instructions and/or data provided by the CPU 602, memory 604, and/or storage 606. The components of device 600, including CPU 602, memory 604, storage 606, user input devices 608, network interface 614, and audio processor 612 are connected via one or more data buses 622.
A graphics subsystem 620 is further connected with data bus 622 and the components of the device 600. The graphics subsystem 620 includes a graphics processing unit (GPU) 616 and graphics memory 618. Graphics memory 618 includes a display memory (e.g., a frame buffer) used for storing pixel data for each pixel of an output image. Graphics memory 618 can be integrated in the same device as GPU 616, connected as a separate device with GPU 616, and/or implemented within memory 604. Pixel data can be provided to graphics memory 618 directly from the CPU 602. Alternatively, CPU 602 provides the GPU 616 with data and/or instructions defining the desired output images, from which the GPU 616 generates the pixel data of one or more output images. The data and/or instructions defining the desired output images can be stored in memory 604 and/or graphics memory 618. In an embodiment, the GPU 616 includes 3D rendering capabilities for generating pixel data for output images from instructions and data defining the geometry, lighting, shading, texturing, motion, and/or camera parameters for a scene. The GPU 616 can further include one or more programmable execution units capable of executing shader programs.
The graphics subsystem 620 periodically outputs pixel data for an image from graphics memory 618 to be displayed on display device 610. Display device 610 can be any device capable of displaying visual information in response to a signal from the device 600, including CRT, LCD, plasma, and OLED displays. Device 600 can provide the display device 610 with an analog or digital signal, for example.
It should be noted that access services, such as providing access to games of the current embodiments, delivered over a wide geographical area often use cloud computing. Cloud computing is a style of computing in which dynamically scalable and often virtualized resources are provided as a service over the Internet. Users need not be experts in the technology infrastructure in the “cloud” that supports them. Cloud computing can be divided into different services, such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). Cloud computing services often provide common applications, such as video games, online that are accessed from a web browser, while the software and data are stored on the servers in the cloud. The term cloud is used as a metaphor for the Internet, based on how the Internet is depicted in computer network diagrams, and is an abstraction for the complex infrastructure it conceals.
A game server may be used to perform the operations of the durational information platform for video game players, in some embodiments. Most video games played over the Internet operate via a connection to the game server. Typically, games use a dedicated server application that collects data from players and distributes it to other players. In other embodiments, the video game may be executed by a distributed game engine. In these embodiments, the distributed game engine may be executed on a plurality of processing entities (PEs) such that each PE executes a functional segment of a given game engine that the video game runs on. Each processing entity is seen by the game engine as simply a compute node. Game engines typically perform an array of functionally diverse operations to execute a video game application along with additional services that a user experiences. For example, game engines implement game logic, perform game calculations, physics, geometry transformations, rendering, lighting, shading, audio, as well as additional in-game or game-related services. Additional services may include, for example, messaging, social utilities, audio communication, game play replay functions, help function, etc. While game engines may sometimes be executed on an operating system virtualized by a hypervisor of a particular server, in other embodiments, the game engine itself is distributed among a plurality of processing entities, each of which may reside on different server units of a data center.
According to this embodiment, the respective processing entities for performing the operations may be a server unit, a virtual machine, or a container, depending on the needs of each game engine segment. For example, if a game engine segment is responsible for camera transformations, that particular game engine segment may be provisioned with a virtual machine associated with a graphics processing unit (GPU) since it will be doing a large number of relatively simple mathematical operations (e.g., matrix transformations). Other game engine segments that require fewer but more complex operations may be provisioned with a processing entity associated with one or more higher power central processing units (CPUs).
By distributing the game engine, the game engine is provided with elastic computing properties that are not bound by the capabilities of a physical server unit. Instead, the game engine, when needed, is provisioned with more or fewer compute nodes to meet the demands of the video game. From the perspective of the video game and a video game player, the game engine being distributed across multiple compute nodes is indistinguishable from a non-distributed game engine executed on a single processing entity, because a game engine manager or supervisor distributes the workload and integrates the results seamlessly to provide video game output components for the end user.
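By way of a non-limiting illustration, the supervisor behavior described above might be sketched as follows. The segment names, the use of a thread pool to stand in for compute nodes, and the merge logic are all hypothetical simplifications for illustration only.

```python
# Illustrative sketch only: a supervisor fans functional game-engine
# segments out to worker "nodes" and integrates the results each tick,
# so the distributed engine looks like a single engine to the game.

from concurrent.futures import ThreadPoolExecutor

# Each segment is a function from game state to a partial result
# (hypothetical stand-ins for physics, audio, rendering, etc.).
SEGMENTS = {
    "physics": lambda state: {**state, "physics_done": True},
    "audio": lambda state: {**state, "audio_done": True},
    "rendering": lambda state: {**state, "render_done": True},
}

def run_tick(state, max_nodes=3):
    """Execute every engine segment on a pool of compute nodes and merge
    the partial results into a single integrated game state."""
    merged = dict(state)
    with ThreadPoolExecutor(max_workers=max_nodes) as pool:
        for result in pool.map(lambda seg: seg(state), SEGMENTS.values()):
            merged.update(result)
    return merged
```

From the player's perspective only the merged state is visible, which reflects the point above that the distribution is indistinguishable from a single processing entity.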
Users access the remote services with client devices, which include at least a CPU, a display, and I/O. The client device can be a PC, a mobile phone, a netbook, a PDA, etc. In one embodiment, the network executing on the game server recognizes the type of device used by the client and adjusts the communication method employed. In other cases, client devices use a standard communications method, such as HTML, to access the application on the game server over the Internet.
It should be appreciated that a given video game or gaming application may be developed for a specific platform and a specific associated controller device. However, when such a game is made available via a game cloud system as presented herein, the user may be accessing the video game with a different controller device. For example, a game might have been developed for a game console and its associated controller, whereas the user might be accessing a cloud-based version of the game from a personal computer utilizing a keyboard and mouse. In such a scenario, the input parameter configuration can define a mapping from inputs which can be generated by the user’s available controller device (in this case, a keyboard and mouse) to inputs which are acceptable for the execution of the video game.
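By way of example, and not by way of limitation, such an input parameter configuration might be represented as a simple lookup from available-device events to game-acceptable inputs. The event and input names below are hypothetical.

```python
# Illustrative sketch only: an input parameter configuration mapping
# keyboard/mouse events to the controller inputs the game was developed
# for (all names are hypothetical).

INPUT_MAP = {
    "key_w": "left_stick_up",
    "key_space": "button_cross",
    "mouse_left": "button_r2",
}

def translate_input(device_event):
    """Translate an event from the user's available device into an input
    acceptable for execution of the video game; None if unmapped."""
    return INPUT_MAP.get(device_event)
```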
In another example, a user may access the cloud gaming system via a tablet computing device, a touchscreen smartphone, or other touchscreen driven device. In this case, the client device and the controller device are integrated together in the same device, with inputs being provided by way of detected touchscreen inputs/gestures. For such a device, the input parameter configuration may define particular touchscreen inputs corresponding to game inputs for the video game. For example, buttons, a directional pad, or other types of input elements might be displayed or overlaid during running of the video game to indicate locations on the touchscreen that the user can touch to generate a game input. Gestures such as swipes in particular directions or specific touch motions may also be detected as game inputs. In one embodiment, a tutorial can be provided to the user indicating how to provide input via the touchscreen for gameplay, e.g., prior to beginning gameplay of the video game, so as to acclimate the user to the operation of the controls on the touchscreen.
In some embodiments, the client device serves as the connection point for a controller device. That is, the controller device communicates via a wireless or wired connection with the client device to transmit inputs from the controller device to the client device. The client device may in turn process these inputs and then transmit input data to the cloud game server via a network (e.g., accessed via a local networking device such as a router). However, in other embodiments, the controller can itself be a networked device, with the ability to communicate inputs directly via the network to the cloud game server, without being required to communicate such inputs through the client device first. For example, the controller might connect to a local networking device (such as the aforementioned router) to send to and receive data from the cloud game server. Thus, while the client device may still be required to receive video output from the cloud-based video game and render it on a local display, input latency can be reduced by allowing the controller to send inputs directly over the network to the cloud game server, bypassing the client device.
In one embodiment, a networked controller and client device can be configured to send certain types of inputs directly from the controller to the cloud game server, and other types of inputs via the client device. For example, inputs whose detection does not depend on any additional hardware or processing apart from the controller itself can be sent directly from the controller to the cloud game server via the network, bypassing the client device. Such inputs may include button inputs, joystick inputs, embedded motion detection inputs (e.g., accelerometer, magnetometer, gyroscope), etc. However, inputs that utilize additional hardware or require processing by the client device can be sent by the client device to the cloud game server. These might include captured video or audio from the game environment that may be processed by the client device before sending to the cloud game server. Additionally, inputs from motion detection hardware of the controller might be processed by the client device in conjunction with captured video to detect the position and motion of the controller, which would subsequently be communicated by the client device to the cloud game server. It should be appreciated that the controller device in accordance with various embodiments may also receive data (e.g., feedback data) from the client device or directly from the cloud gaming server.
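The routing rule described above might be sketched, purely for illustration, as a predicate over input types: inputs needing no hardware or processing beyond the controller go directly to the cloud game server, while the rest pass through the client device. The type names and path strings are hypothetical.

```python
# Illustrative sketch only: deciding whether an input travels directly
# from a networked controller to the cloud game server or via the client
# device (names are hypothetical).

DIRECT_TYPES = {"button", "joystick", "accelerometer",
                "gyroscope", "magnetometer"}

def route_input(input_type):
    """Return the path an input of the given type should take."""
    if input_type in DIRECT_TYPES:
        # Self-contained controller inputs bypass the client device,
        # reducing input latency.
        return "controller->network->cloud_server"
    # Inputs needing extra hardware/processing (e.g., captured video or
    # audio) are handled by the client device first.
    return "controller->client_device->cloud_server"
```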
It should be understood that the various embodiments defined herein may be combined or assembled into specific implementations using the various features disclosed herein. Thus, the examples provided are just some possible examples, without limitation to the various implementations that are possible by combining the various elements to define many more implementations. In some examples, some implementations may include fewer elements, without departing from the spirit of the disclosed or equivalent implementations.
Embodiments of the present disclosure may be practiced with various computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers and the like. Embodiments of the present disclosure can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a wire-based or wireless network.
Although the method operations were described in a specific order, it should be understood that other housekeeping operations may be performed in between operations, or operations may be adjusted so that they occur at slightly different times or may be distributed in a system which allows the occurrence of the processing operations at various intervals associated with the processing, as long as the processing of the telemetry and game state data for generating modified game states is performed in the desired way.
One or more embodiments can also be fabricated as computer readable code on a computer readable medium. The computer readable medium is any data storage device that can store data, which can thereafter be read by a computer system. Examples of the computer readable medium include hard drives, network attached storage (NAS), read-only memory, random-access memory, CD-ROMs, CD-Rs, CD-RWs, magnetic tapes, and other optical and non-optical data storage devices. The computer readable medium can include computer readable tangible media distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
In one embodiment, the video game is executed either locally on a gaming machine, a personal computer, or on a server. In some cases, the video game is executed by one or more servers of a data center. When the video game is executed, some instances of the video game may be a simulation of the video game. For example, the video game may be executed by an environment or server that generates a simulation of the video game. The simulation, in some embodiments, is an instance of the video game. In other embodiments, the simulation may be produced by an emulator. In either case, if the video game is represented as a simulation, that simulation is capable of being executed to render interactive content that can be interactively streamed, executed, and/or controlled by user input.
Although the foregoing embodiments have been described in some detail for purposes of clarity of understanding, it will be apparent that certain changes and modifications can be practiced within the scope of the appended claims. Accordingly, the present embodiments are to be considered as illustrative and not restrictive, and the embodiments are not to be limited to the details given herein, but may be modified within the scope and equivalents of the appended claims.