Method and apparatus for improving efficiency without increasing latency in graphics processing

Information

  • Patent Grant
  • Patent Number
    11,904,233
  • Date Filed
    Friday, February 12, 2021
  • Date Issued
    Tuesday, February 20, 2024
Abstract
In methods and apparatuses for reducing latency in graphics processing, inputs are received and a first set of frames is generated and stored. Once all of the frames in the first set of frames have been produced, they may be delivered to a GPU thread. Each frame is then rendered by the GPU. Starting processing of frames after one or more of the frames have been stored increases a likelihood that the GPU thread will produce rendered frames without stalling. It is emphasized that this abstract is provided to comply with the rules requiring an abstract that will allow a searcher or other reader to quickly ascertain the subject matter of the technical disclosure. It is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims.
Description
CROSS-REFERENCE TO RELATED APPLICATION

This application is related to commonly-assigned, provisional application Ser. No. 61/666,628, filed Jun. 29, 2012, and entitled “DETERMINING TRIGGERS FOR CLOUD-BASED EMULATED GAMES”, the entire disclosure of which is incorporated herein by reference.


This application is related to commonly-assigned, provisional application Ser. No. 61/666,645, filed Jun. 29, 2012, and entitled “HAPTIC ENHANCEMENTS FOR EMULATED VIDEO GAME NOT ORIGINALLY DESIGNED WITH HAPTIC CAPABILITIES”, the entire disclosure of which is incorporated herein by reference.


This application is related to commonly-assigned, provisional application Ser. No. 61/666,665, filed Jun. 29, 2012, and entitled “CONVERSION OF HAPTIC EVENTS INTO SCREEN EVENTS”, the entire disclosure of which is incorporated herein by reference.


This application is related to commonly-assigned, provisional application Ser. No. 61/666,679, filed Jun. 29, 2012, and entitled “SUSPENDING STATE OF CLOUD-BASED LEGACY APPLICATIONS”, the entire disclosure of which is incorporated herein by reference.


This application is related to commonly-assigned application Ser. No. 13/631,725, now U.S. Pat. No. 9,248,374, filed Sep. 28, 2012, the same day as the present application, and entitled “REPLAY AND RESUMPTION OF SUSPENDED GAME” to Brian Michael Christopher Watson, Victor Octav Suba Miura, Jacob P. Stine and Nicholas J. Cardell, the entire disclosure of which is incorporated herein by reference.


This application is related to commonly-assigned, co-pending application Ser. No. 13/631,740, now U.S. Pat. No. 9,707,476, filed the same day as the present application, and entitled “METHOD FOR CREATING A MINI-GAME” to Brian Michael Christopher Watson, Victor Octav Suba Miura, and Jacob P. Stine, the entire disclosure of which is incorporated herein by reference.


This application is related to commonly-assigned, co-pending application Ser. No. 13/631,785, now U.S. Pat. No. 9,694,276, filed Sep. 28, 2012, and entitled “PRE-LOADING TRANSLATED CODE IN CLOUD BASED EMULATED APPLICATIONS”, to Jacob P. Stine, Victor Octav Suba Miura, Brian Michael Christopher Watson, and Nicholas J. Cardell, the entire disclosure of which is incorporated herein by reference.


This application is related to commonly-assigned, co-pending application Ser. No. 13/631,803, published as U.S. Patent Application Publication Number 2014-0092087, filed Sep. 28, 2012, and entitled “ADAPTIVE LOAD BALANCING IN SOFTWARE EMULATION OF GPU HARDWARE”, to Takayuki Kazama and Victor Octav Suba Miura, the entire disclosure of which is incorporated herein by reference.


FIELD OF THE DISCLOSURE

The present disclosure is related to video game emulation. Among other things, this application describes a method and apparatus for reducing the latency in emulation of a computer game program.


BACKGROUND OF THE INVENTION

In a cloud-based gaming system the majority of the processing takes place on the cloud-based server. This allows the client device platform that is communicating with the cloud-based server to have minimal processing power. However, shifting the processing requirements to the cloud increases the possibility of latencies disrupting the game-playing experience. For example, in a first-person shooter game long latencies may reduce a user's reaction time, and therefore cause the user to be shot when he would otherwise have had time to avoid an incoming attack.


The latencies in a cloud-based gaming system may originate from several different sources, such as the network, the client side, the server side, or any combination thereof. By way of example, latencies may be caused by congestion on the network. If a network does not have sufficient bandwidth, the data transfers between the cloud-based gaming system and the client device platform may be delayed. Latencies on the client side may be a result of buffering the incoming data, or even due to variations in the refresh rate of the client's monitor. Additionally, latencies originating on the server side may include the time it takes to process input data in order to return output data to the client device platform. Therefore, increasing the speed at which a cloud-based server processes data may result in substantial reductions in the latency of the system.


On a cloud-based system, the client device platform and the network speed may vary among users, but the processing capabilities of the server side are the same for each user of the system. Therefore, reductions in latency on the server side will decrease the latency for all users of the system. One solution for increasing the processing speed on the server is to have the cloud-based gaming system run as many operations in parallel as possible. However, running operations in parallel may not help reduce latencies when a game is first started, because at the initiation of the game there may not yet be any data buffered for the cloud-based gaming system to operate on.


It is within this context that aspects of the present disclosure arise.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a schematic diagram of a client device platform and an emulator communicating over a network.



FIG. 2 is a flow diagram describing a method for reducing the latency of an emulator operating on a network.



FIG. 3 is a schematic diagram of the client device platform generating a game input while displaying a game in a first state and thereafter receiving an encoded frame of the second state after the emulator has processed the game input.



FIG. 4 is a block diagram describing the instructions for how the emulator reduces the latency while processing game inputs according to an aspect of the present disclosure.





DETAILED DESCRIPTION OF THE DRAWINGS

Although the following detailed description contains many specific details for the purposes of illustration, those of ordinary skill in the art will appreciate that many variations and alterations to the following details are within the scope of the present disclosure. Accordingly, the aspects of the present disclosure described below are set forth without any loss of generality to, and without imposing limitations upon, the claims that follow this description.


Aspects of the present disclosure describe a method and apparatus that may be used to enhance efficiency in emulation of a computer program that involves emulation of both a central processing unit (CPU) and a graphics processing unit (GPU). Certain aspects are particularly advantageous for reducing the latency of the emulation of a computer game over a cloud-based network, particularly where the processing bottlenecks that can lead to latency are due to the CPU as opposed to the GPU. As used herein, the term “latency” refers to the time delay between the generation of a game input by a client device platform when the game is in a first state and the display of the second state of the game by the client device platform. The game is advanced from the first state to the second state by the emulator processing the game inputs and delivering the resulting frame depicting the second state back to the client device platform. Aspects of the present disclosure describe an emulator that is configured to emulate a client device platform. The emulator may be comprised of an emulated central processing unit (CPU), an emulated graphics processing unit (GPU), and an emulated encoder, each of which may be operated in parallel. However, in order to reduce the latency in the emulator, the emulated GPU is delayed until a first set of frames is generated by the emulated CPU. Delaying the start of the emulated GPU allows the emulated GPU to have multiple frames to operate on from the start instead of having to process a single frame at a time. Once the buffer has been built up, the emulated GPU may begin processing frames in parallel with the emulated CPU.
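The delayed-start scheme described above can be sketched as a small producer/consumer buffer. The class name, batch size, and threading details below are illustrative assumptions for the sketch, not the patent's implementation:

```python
import threading
from collections import deque


class DelayedStartBuffer:
    """Frame buffer that holds the GPU thread idle until a first
    batch of CPU-generated frames is complete (names illustrative)."""

    def __init__(self, batch_size=2):
        self.batch_size = batch_size
        self.frames = deque()
        self.cond = threading.Condition()

    def push(self, frame):
        # Emulated-CPU side: store each generated frame, and wake the
        # GPU thread only once the whole first set is buffered.
        with self.cond:
            self.frames.append(frame)
            if len(self.frames) >= self.batch_size:
                self.cond.notify()

    def take_batch(self):
        # Emulated-GPU side: idle until a full batch is available,
        # then drain it in one step so rendering can proceed without
        # stalling between frames.
        with self.cond:
            self.cond.wait_for(lambda: len(self.frames) >= self.batch_size)
            batch = list(self.frames)
            self.frames.clear()
            return batch
```

In use, the emulated CPU would call `push` for each generated frame while a separate GPU thread blocks in `take_batch`; the condition variable keeps the GPU thread asleep until the first set is complete.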


1. General-purpose cloud server architectures have aggressive power-saving features that create some latency between the time a thread begins processing data and the point when the CPU achieves maximum compute throughput. Therefore it is advantageous to queue as sizable a workload as possible before starting the GPU thread, so that the GPU thread has the minimum potential of draining its workload before the frame has been completed. If the GPU thread runs out of work ahead of time, it will fall asleep and will suffer compute throughput latency when new work is submitted.

2. Cloud server operating systems distribute threads across many cores in round-robin fashion to improve heat dissipation and extend CPU component lifespans (the rate varies by OS configuration; 2 ms to 10 ms windows are common for servers configured to deliver interactive multimedia content). Each time a thread is switched to a different core, the new core's L1 and L2 caches must be re-primed. When a task, such as the GPU thread, is able to execute its entire workload quickly without stalls, it increases the likelihood that most or all work is done within a single core instance, and lessens the performance lost when the thread is shifted to a new core. But if the thread stalls frequently during the course of generating a single frame, the operating system may decide to shift it across several different cores in an effort to load-balance against other, busier threads.

3. The synchronization model does not benefit the CPU other than by simplifying the CPU-GPU communication model, so that the CPU spends less time determining whether it must wait for GPU frames to complete. Since the CPU is commonly the source of latency, increasing GPU latency slightly in exchange for reducing CPU latency more substantially results in an overall latency reduction. However, this may change with the advent of APU processing (integrated CPU and GPU, whereby using GPU resources can negatively impact available compute power along the CPU pipeline).

4. The model scales well to running multiple instances on a single cloud server, which, in turn, can significantly reduce the operational cost of the product. By having GPU jobs execute in short, efficient chunks, e.g., at 16 ms (60 Hz) or 32 ms (30 Hz) intervals, the efficiency and priority heuristics of the operating system's multitasking kernel are improved, along with L1 and L2 cache usage and the power-saving features of the underlying hardware. Therefore, the overall latency/throughput of concurrent emulation systems hosted on a single server is improved.
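The stall-avoidance argument above can be illustrated with a toy, deterministic model (the numbers and the one-item-per-tick consumer are entirely hypothetical) that counts how often a running consumer thread wakes to an empty queue:

```python
def count_stalls(arrivals, batch_size):
    """Count how often the consumer finds its queue empty when work
    items arriving at the given ticks are handed over in batches of
    batch_size. Purely illustrative: the consumer drains one item per
    tick, and only stalls it suffers after it has started count."""
    queue = 0      # items already handed to the consumer
    pending = 0    # items produced but not yet handed over
    stalls = 0
    started = False
    for tick in range(max(arrivals) + 1):
        pending += arrivals.count(tick)
        if pending >= batch_size:   # hand over a full batch at once
            queue += pending
            pending = 0
            started = True
        if queue:
            queue -= 1              # consumer processes one item
        elif started:
            stalls += 1             # running thread woke to no work
    return stalls
```

With items trickling in at ticks 0, 2, 4, and 6, one-at-a-time handoff stalls the consumer between every item, while handing over the full batch lets it run to completion without stalling, at the cost of starting later.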


By way of example, and not by way of limitation, at the start of gameplay, a client device platform may deliver one or more inputs to the emulator over the network. The emulated CPU receives the inputs and initiates the generation of a first set of frames. When a frame is generated by the emulated CPU, it is stored in a buffer on the emulator. Once all of the frames in the first set of frames have been produced by the emulated CPU, the contents of the buffer may be delivered to the emulated GPU. Each frame is then rendered by the emulated GPU in order to create rendered frames. The rendered frames may then be delivered to an encoder. Once received by the encoder, the rendered frames are encoded and delivered to the client device platform over the network.
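The data flow just described — inputs to the emulated CPU, a buffered first set of frames to the emulated GPU, and rendered frames to the encoder — can be sketched as a chain of functions. All function names and frame representations here are illustrative placeholders:

```python
def cpu_generate(inputs):
    # Emulated CPU: turn each game input into a high-level frame
    # description (e.g., a display list).
    return [{"display_list": f"dl-for-{i}"} for i in inputs]


def gpu_render(frame_set):
    # Emulated GPU: render the whole buffered set in one pass.
    return [{"pixels": f["display_list"].replace("dl", "px")}
            for f in frame_set]


def encode(rendered):
    # Encoder: compress each rendered frame for delivery.
    return [f"enc({f['pixels']})" for f in rendered]


def emulate(inputs):
    buffer = cpu_generate(inputs)   # store the first set of frames
    rendered = gpu_render(buffer)   # GPU starts after the set is complete
    return encode(rendered)         # encoded frames return over the network
```

In the actual system these stages run in parallel on separate threads; the sequential sketch only shows the order in which each frame set moves through the stages.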



FIG. 1 is a schematic of an embodiment of the present invention. Emulator 107 may be accessed by a client device platform 104 over a network 160. Client device platform 104 may access alternative emulators 107 over the network 160. Emulators 107 may be identical to each other, or they may each be programmed to emulate unique game program titles 106 or unique sets of game program titles 106.


Client device platform 104 may include a central processor unit (CPU) 131. By way of example, a CPU 131 may include one or more processors, which may be configured according to, e.g., a dual-core, quad-core, multi-core, or Cell processor architecture. Client device platform 104 may also include a memory 132 (e.g., RAM, DRAM, ROM, and the like). The CPU 131 may execute a process-control program 133, portions of which may be stored in the memory 132. The client device platform 104 may also include well-known support circuits 140, such as input/output (I/O) circuits 141, power supplies (P/S) 142, a clock (CLK) 143 and cache 144. The client device platform 104 may optionally include a mass storage device 134 such as a disk drive, CD-ROM drive, tape drive, or the like to store programs and/or data. The client device platform 104 may also optionally include a display unit 137 and a user interface unit 138 to facilitate interaction between the client device platform 104 and a user. The display unit 137 may be in the form of a cathode ray tube (CRT) or flat panel screen that displays text, numerals, or graphical symbols. The user interface unit 138 may include a keyboard, mouse, joystick, touch pad, game controller, light pen, or other device. A controller 145 may be connected to the client device platform 104 through the I/O circuit 141 or it may be directly integrated into the client device platform 104. The controller 145 may facilitate interaction between the client device platform 104 and a user. The controller 145 may include a keyboard, mouse, joystick, light pen, hand-held controls or other device. The controller 145 may be capable of generating a haptic response 146. By way of example and not by way of limitation, the haptic response 146 may be vibrations or any other feedback corresponding to the sense of touch. 
The client device platform 104 may include a network interface 139, configured to enable the use of Wi-Fi, an Ethernet port, or other communication methods.


The network interface 139 may incorporate suitable hardware, software, firmware or some combination of two or more of these to facilitate communication via an electronic communications network 160. The network interface 139 may be configured to implement wired or wireless communication over local area networks and wide area networks such as the Internet. The client device platform 104 may send and receive data and/or requests for files via one or more data packets over the network 160.


The preceding components may exchange signals with each other via an internal system bus 150. The client device platform 104 may be a general purpose computer that becomes a special purpose computer when running code that implements embodiments of the present invention as described herein.


The emulator 107 may include a central processor unit (CPU) 131′. By way of example, a CPU 131′ may include one or more processors, which may be configured according to, e.g., a dual-core, quad-core, multi-core, or Cell processor architecture. The emulator 107 may also include a memory 132′ (e.g., RAM, DRAM, ROM, and the like). The CPU 131′ may execute a process-control program 133′, portions of which may be stored in the memory 132′. The process-control program 133′ may include programs that emulate a different system designed to play one or more games 106. The different system may be a so-called “legacy” system, e.g., an older system. Game programs originally configured to be run on the legacy system are sometimes referred to herein as “legacy games”.


By way of example, the CPU of a legacy system may be emulated by the emulated CPU 101 and the GPU of the legacy system may be emulated by the emulated GPU 102. The emulator may optionally be coupled to an encoder 103, which may be implemented on the CPU 131′ or on a separate processor. The emulated CPU 101, the emulated GPU 102, and the (optional) encoder 103 may be configured to operate in parallel. The emulator 107 may also include well-known support circuits 140′, such as input/output (I/O) circuits 141′, power supplies (P/S) 142′, a clock (CLK) 143′ and cache 144′. The emulator 107 may optionally include a mass storage device 134′ such as a disk drive, CD-ROM drive, tape drive, or the like to store programs and/or data. The emulator 107 may also optionally include a display unit 137′ and user interface unit 138′ to facilitate interaction between the emulator 107 and a user who requires direct access to the emulator 107. The display unit 137′ may be in the form of a cathode ray tube (CRT) or flat panel screen that displays text, numerals, or graphical symbols. The user interface unit 138′ may include a keyboard, mouse, joystick, light pen, or other device. The emulator 107 may include a network interface 139′, configured to enable the use of Wi-Fi, an Ethernet port, or other communication methods.


The network interface 139′ may incorporate suitable hardware, software, firmware or some combination of two or more of these to facilitate communication via the electronic communications network 160. The network interface 139′ may be configured to implement wired or wireless communication over local area networks and wide area networks such as the Internet. The emulator 107 may send and receive data and/or requests for files via one or more data packets over the network 160.


The preceding components may exchange signals with each other via an internal system bus 150′. The emulator 107 may be a general purpose computer that becomes a special purpose computer when running code that implements embodiments of the present invention as described herein.


Emulator 107 may access a game program 106, (e.g., a legacy game program) that has been selected by the client device platform 104 for emulation through the internal system bus 150′. There may be more than one game program 106 stored in the emulator. The game programs may also be stored in the memory 132′ or in the mass storage device 134′. Additionally, one or more game programs 106 may be stored at a remote location accessible to the emulator 107 over the network 160. Each game program 106 contains executable game code 108 that is used by the emulated CPU 101 to generate the frames 212 in response to inputs 211 from the client device platform 104.


By way of example, the game program 106 that is emulated may be any game program that is not compatible with the client device platform 104. By way of example, and not by way of limitation, the game program 106 may be a legacy game designed to be played on Sony Computer Entertainment's PlayStation console, but the client device platform 104 is a home computer. By way of alternative example, the game program 106 may have been designed to be played on a PlayStation 2 console, but the client device platform 104 is a PlayStation 3 console. By way of further example and not by way of limitation, a game program 106 may have been designed to be played on a PlayStation console, but the client device platform 104 is a handheld console such as the PlayStation Vita from Sony Computer Entertainment.



FIG. 2 is a flow diagram of a method 200 for reducing the latency of the emulation of a legacy game 106 over a cloud-based network. FIG. 2 depicts a client device platform 104 communicating with an emulator 107 over a network 160. The dotted arrows represent data being delivered over the network 160. Rectangular boxes represent processing steps, and the parallelograms represent the various forms of data being transferred. The emulator 107 may be comprised of an emulated CPU 101 and an emulated GPU 102. Certain optional parts of the method may be implemented on an encoder 103. The emulated CPU 101, the emulated GPU 102, and (optionally) the encoder 103 may be operated in parallel with each other.


The emulation method 200 begins with the client device platform 104 generating one or more game inputs 211 at block 251. By way of example, and not by way of limitation, the game inputs 211 may be commands that control the game play of a game program 106. Game inputs 211 which control the game play may include commands that are generally used by a game player to advance the game program 106 from a first state 301 to a second state 302. The game inputs 211 may be generated by a controller 145, or they may be automatically generated by the client device platform 104. Game inputs 211 may include, but are not limited to, inputs that cause a main character in a game program 106 to move to a new position, swing a sword, select an item from a menu, or any other action that can take place during the game play of a game program 106. As shown in FIG. 3, the game input 211 is generated by the game player pressing the X-button 145X. The pressing of the X-button 145X is designated by the button being shaded, whereas the other buttons remain white.



FIG. 3 is a simplified schematic diagram of the emulation process depicting the advancement from the first state 301 to the second state 302. For purposes of clarity, the processing that takes place within the emulator 107 has been omitted from FIG. 3. The first state 301, as shown on display screen 137T=0, is comprised of the main character 340 standing to the left of a large crevasse. The second state 302, as shown on display screen 137T=1, is comprised of the main character 340 after it has been instructed, by a game input 211, to jump in the upwards direction. The labels 137T=0 and 137T=1 are used in order to indicate that a period of time has elapsed between the time the game input 211 is generated (T=0) and the time that the result of the game input 211 is first displayed on the client device platform 104 (T=1). The period of time between T=0 and T=1 is considered the latency. The large gap between the main character 340 and the ground in the second state 302 was chosen to clearly indicate a jump has been made. However, it should be noted that the time T=1 is the time at which the first frame of the jump is displayed by the client device platform 104.


Returning to FIG. 2, after the game inputs 211 have been generated, the client device platform 104 delivers them to the emulator 107 over the network 160, as indicated by block 252. The emulator 107 receives the inputs 211 with the emulated CPU 101 at block 253. At this point, the emulated CPU 101 begins processing the game inputs 211 in order to generate a first set of frames 212 at block 254. The emulated CPU may utilize the executable game code 108 of the game program 106 in order to process the game inputs 211. By way of example, and not by way of limitation, the generation of the first set of frames may include the generation of display lists for the frames, the generation of graphics primitives, or any other high-level graphics processing operations. Other steps that may be performed by the emulated CPU while it is generating the first set of frames, before the frames are ready for rendering by the emulated GPU, include, but are not limited to, video decoding, audio mixing, and the like, which in emulators often cannot be performed asynchronously from the CPU. The first set of frames 212 may be comprised of one or more individual frames, e.g., approximately two frames, depending on the specifics of the hardware being implemented. By way of example, and not by way of limitation, the optimum quantity in emulation of certain titles is typically two (2) at 60 Hz. This is because the great majority of titles for certain legacy platforms, such as the PlayStation (sometimes known as the PlayStation 1 or PS1), run their CPU-side update logic at 30 Hz, not 60 Hz. The second frame is a tween or interlace frame meant to improve visual animation fluidity, and does not vary based on user input. Interlocking the CPU in between these 30 Hz frame-pairs does not reduce latency or improve gameplay experience. This behavior can usually be determined based on the video mode selection made by the legacy title.
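The two-frame optimum described above follows from the ratio of the video output rate to the title's CPU-side update rate. A hedged helper might choose the batch size this way; the function name and the fall-back behavior for uneven rates are assumptions for illustration, not part of the disclosure:

```python
def first_set_size(output_hz, logic_hz):
    """Choose how many frames to buffer before starting the GPU.
    For a legacy title running 30 Hz update logic with 60 Hz video
    output, the second frame of each pair is a tween/interlace frame
    that does not depend on user input, so buffering the pair adds
    no input latency."""
    if output_hz % logic_hz != 0:
        return 1                   # no clean frame-pairing; process singly
    return output_hz // logic_hz   # e.g., 60 // 30 == 2
```

In practice the patent notes the pairing behavior can usually be inferred from the legacy title's video mode selection rather than computed from rates directly.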


After each individual frame in the first group of frames 212 is processed, it is stored in a buffer as indicated by block 255. The buffer may be in the memory 132′ of the emulator 107. By way of example, it may take approximately 10-12 milliseconds to finish processing the entire first set of frames 212 and store them all in the buffer. Once all of the frames in the first set of frames 212 have been stored in the buffer, the emulated CPU 101 may deliver the first set of frames 212 to the emulated GPU 102 as indicated by block 256. Alternatively, the emulated CPU 101 may send the location of the first set of frames 212 to the emulated GPU 102, and the emulated GPU 102 may then retrieve the first set of frames 212 from the buffer.


At block 257 the emulated GPU 102 receives the first set of frames 212. Until this time, the emulated GPU 102 has been idle. It would appear that keeping one of the processing units idle for a period of time would increase the latency of the emulator, but the inventors have determined that this is not the case. Delaying the start of the emulated GPU 102 allows a large buffer of work to be available for the emulated GPU 102 to process. Further, the processing by the emulated GPU 102 may then be done in parallel with the emulated CPU 101 while the emulated CPU 101 begins processing a second set of frames 212′. Further, by waiting for the emulated CPU 101 to finish processing the first set of frames 212 before the emulated GPU 102 is initiated, the emulated CPU 101 may run more efficiently.
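The benefit of overlapping CPU generation of one frame set with GPU rendering of the previous set can be seen with simple pipeline timing arithmetic. The per-set costs here are hypothetical numbers chosen only to make the comparison concrete:

```python
def total_time(sets, cpu_ms, gpu_ms, overlapped=True):
    """Time to finish `sets` frame sets. Serial: each set costs
    cpu_ms + gpu_ms. Overlapped: after the first set is generated,
    the GPU renders set N while the CPU generates set N+1, so the
    steady-state cost per set is max(cpu_ms, gpu_ms)."""
    if not overlapped:
        return sets * (cpu_ms + gpu_ms)
    # The first set must be fully generated before the GPU starts;
    # the last set still needs its own render after the pipeline drains.
    return cpu_ms + (sets - 1) * max(cpu_ms, gpu_ms) + gpu_ms
```

For three sets at a hypothetical 12 ms of CPU work and 8 ms of GPU work per set, the serial schedule takes 60 ms while the overlapped schedule takes 44 ms, with the gap growing as more sets are processed.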


The emulated GPU 102 begins rendering the first set of frames 212 at block 258. Rendering the frames may comprise processing the frames according to a standard graphics pipeline. By way of example and not by way of limitation, a standard graphics pipeline may include vertex processing, clipping, primitive assembly, triangle setup, rasterization, occlusion culling, parameter interpolation, pixel shading, and frame buffering. Further by way of example, and not by way of limitation, the rasterization may be tile-based rasterization. Tile-based rasterization is described in detail in commonly-assigned, co-pending application Ser. No. 13/631,803, (Published as U.S. Patent Application Publication Number 20140092087), the entire disclosure of which has been incorporated by reference. Rendered frames 213 may then optionally be delivered to the encoder 103 at block 259. The rendered frames 213 may be delivered once all frames in the first group of frames 212 have been rendered, or each rendered frame 213 may be delivered to the encoder 103 immediately after it has been rendered. Additionally, the rendered frames 213 may be stored in a frame buffer in a memory 132′ on the emulator 107 and the encoder 103 may be provided with the location of the rendered frames so that it may retrieve the rendered frames 213.
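The standard graphics pipeline listed above can be sketched as an ordered chain of per-frame steps. The stage bodies here are placeholders that only record ordering; a real renderer would do substantive work at each stage:

```python
# Ordered stages of the standard graphics pipeline named in the text.
STAGES = [
    "vertex_processing", "clipping", "primitive_assembly",
    "triangle_setup", "rasterization", "occlusion_culling",
    "parameter_interpolation", "pixel_shading", "frame_buffering",
]


def render(frame):
    # Pass the frame through every stage in pipeline order, recording
    # which stages were applied (placeholder for real stage work).
    applied = []
    for stage in STAGES:
        applied.append(stage)
    return dict(frame, stages=applied)
```

The rasterization stage is where a tile-based variant, as described in the incorporated application, could be substituted without changing the surrounding stage order.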


At block 260 the encoder 103 may optionally receive the rendered frames 213. Thereafter the encoder 103 may optionally initiate the encoding process. The rendered frames 213 may be encoded according to a proprietary or a standard codec. The encoder 103 may utilize I-frames, P-frames, and B-frames, or any combination thereof. By way of example, and not by way of limitation, the encoder 103 may use MPEG-4, H.264/MPEG-4 AVC, or WMV codecs. Once the frames have been encoded, the encoded frames 214 may be delivered to the client device platform 104 over the network 160. The client device platform may receive the encoded frames 214 at block 263.


As shown in FIG. 4, a set of emulator instructions 470 may be implemented, e.g., by the emulator 107. The emulator instructions 470 may be formed on a non-transitory computer readable medium such as the memory 132′ or the mass storage device 134′. The emulator instructions 470 may also be part of the process control program 133′. The emulator instructions may also be implemented through separate emulation programs such as the emulated CPU 101, the emulated GPU 102, or the emulated encoder 103, or by any combination thereof.


The instructions include instructions for receiving inputs 211, e.g., over the network 160 from the client device platform 104, as indicated at 473. Thereafter the emulated CPU 101 may be instructed to begin processing the inputs 211 in order to generate a first set of frames 212, e.g., by executing instructions as indicated at 474. Next, the emulator 107 may be instructed to store each of the frames from the first set of frames 212 into a buffer on the emulator 107 by executing instructions as indicated at 475. Once all of the frames from the first set of frames 212 have been generated, the emulator 107 may be instructed to deliver the first set of frames to the emulated GPU 102 by executing instructions as indicated at 476. The emulated GPU 102 may be provided with instructions for receiving the first set of frames 212 as indicated at 477. At this point the emulated GPU 102 may begin rendering the first set of frames 212 at 478. Until this point, the emulated GPU 102 may have been instructed to be idle in order to allow for a sufficient buffer to be built. The emulator 107 may optionally be further instructed to deliver the rendered frames 213 to the emulated encoder 103 by executing instructions as indicated at 479. The emulated encoder 103 may be provided with instructions for receiving the rendered frames 213 as indicated at 480. When the emulated encoder 103 receives the rendered frames 213, it may optionally be provided with instructions for encoding the first set of rendered frames 213 as indicated at 481. Thereafter, the encoder 103 may optionally be provided with instructions for delivering the encoded first set of frames 214 to the client device platform 104, e.g., over the network 160, as indicated at 482.


While the above is a complete description of the preferred embodiment of the present invention, it is possible to use various alternatives, modifications and equivalents. Therefore, the scope of the present invention should be determined not with reference to the above description but should, instead, be determined with reference to the appended claims, along with their full scope of equivalents. Any feature described herein, whether preferred or not, may be combined with any other feature described herein, whether preferred or not. In the claims that follow, the indefinite article “A”, or “An” refers to a quantity of one or more of the item following the article, except where expressly stated otherwise. The appended claims are not to be interpreted as including means-plus-function limitations, unless such a limitation is explicitly recited in a given claim using the phrase “means for.”

Claims
  • 1. A non-transitory computer readable medium containing program instructions for reducing latency in graphics processing, wherein execution of the program instructions by one or more processors of a computer system causes the one or more processors to carry out a method, the method comprising: starting processing of frames in a first set of frames after one or more of the frames in the first set of frames have been stored; and rendering the first set of frames with a graphics processing unit (GPU) thread on a single processor core of the one or more processors to produce a rendered first set of frames, wherein the GPU thread is instructed to idle until the first set of frames have been stored, wherein the first set of frames includes one or more display lists for at least two frames; wherein starting processing of frames after one or more of the frames in the first set of frames have been stored increases a likelihood that the GPU thread will produce the rendered set of frames without stalling.
  • 2. The non-transitory computer readable medium of claim 1, wherein the method further comprises generating the first set of frames with a central processing unit (CPU), wherein the CPU is configured to generate the first set of frames by processing one or more first inputs from a client device.
  • 3. The non-transitory computer readable medium of claim 2, wherein the CPU begins generating a second set of frames by processing one or more second inputs after the first set of frames have been delivered to a GPU.
  • 4. The non-transitory computer readable medium of claim 3, wherein the GPU renders the first set of frames while the CPU is generating the second set of frames.
  • 5. The non-transitory computer readable medium of claim 1, wherein the method further comprises generating the first set of frames with a central processing unit (CPU), wherein the CPU generates the first set of frames by processing one or more first inputs from a client device according to instructions in a legacy game's executable code.
  • 6. The non-transitory computer readable medium of claim 1, wherein the method further comprises generating the first set of frames with a central processing unit (CPU), wherein generating the first set of frames includes generating graphics primitives for each frame in the first set of frames.
  • 7. The non-transitory computer readable medium of claim 1, wherein the method further comprises generating the first set of frames with a central processing unit (CPU), wherein generating the first set of frames includes high level graphics processing.
  • 8. The non-transitory computer readable medium of claim 1, wherein rendering the first set of frames includes using a graphics pipeline.
  • 9. The non-transitory computer readable medium of claim 8, wherein the graphics pipeline utilizes tile based rasterization.
  • 10. The non-transitory computer readable medium of claim 1, wherein the method further comprises: delivering the rendered first set of frames to an encoder; and encoding the rendered first set of frames to produce an encoded first set of frames.
  • 11. The non-transitory computer readable medium of claim 10, wherein the method further comprises delivering the encoded first set of frames to the client device platform.
  • 12. The non-transitory computer readable medium of claim 1, wherein the GPU is delayed from starting processing any frames until multiple frames in the first set of frames have been stored.
  • 13. A method, comprising: starting processing of frames in a first set of frames after one or more of the frames in the first set of frames have been stored; and rendering the first set of frames with a graphics processing unit (GPU) thread to produce a rendered first set of frames, wherein the GPU thread is instructed to idle until the first set of frames have been stored, wherein the first set of frames includes one or more display lists for at least two frames; and wherein starting processing of frames after one or more of the frames in the first set of frames have been stored increases a likelihood that the GPU thread will produce the rendered set of frames without stalling.
  • 14. The method of claim 13, wherein the GPU is delayed from starting processing any frames until multiple frames in the first set of frames have been stored.
  • 15. A system, comprising: one or more processors; a memory coupled to the one or more processors; one or more instructions embodied in memory for execution by the one or more processors, the instructions being configured to implement a method, the method comprising: starting processing of frames in a first set of frames after one or more of the frames in the first set of frames have been stored; and rendering the first set of frames with a graphics processing unit (GPU) thread to produce a rendered first set of frames, wherein the GPU thread is instructed to idle until the first set of frames have been stored, wherein the first set of frames includes one or more display lists for at least two frames; wherein starting processing of frames after one or more of the frames in the first set of frames have been stored increases a likelihood that the GPU thread will produce the rendered set of frames without stalling.
  • 16. The system of claim 15, wherein the GPU is delayed from starting processing any frames until multiple frames in the first set of frames have been stored.
  • 17. The system of claim 15, wherein the one or more processors includes a plurality of processor cores.
  • 18. The system of claim 15, wherein the one or more processors includes a plurality of processor cores, wherein starting processing of frames after one or more of the frames in the first set of frames have been stored increases a likelihood that the GPU thread will produce the rendered set of frames within a single core instance.
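Claim 9 recites a graphics pipeline that utilizes tile based rasterization, in which the framebuffer is visited one small tile at a time so that tiles a primitive cannot touch are rejected cheaply. The following is a minimal illustrative sketch of that idea, not the patent's implementation; the function names and the edge-function inside test are conventional rasterization techniques, assumed here for illustration.

```python
def edge(ax, ay, bx, by, px, py):
    """Signed edge function: which side of edge (a -> b) the point p lies on."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

def raster_tile_based(tri, width, height, tile=8):
    """Return the set of pixel coordinates covered by triangle `tri`,
    visiting the framebuffer one tile at a time."""
    (x0, y0), (x1, y1), (x2, y2) = tri
    min_x, max_x = min(x0, x1, x2), max(x0, x1, x2)
    min_y, max_y = min(y0, y1, y2), max(y0, y1, y2)
    covered = set()
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            # Tile-level rejection: skip tiles entirely outside the
            # triangle's bounding box before any per-pixel work.
            if tx + tile <= min_x or tx > max_x or ty + tile <= min_y or ty > max_y:
                continue
            for py in range(ty, min(ty + tile, height)):
                for px in range(tx, min(tx + tile, width)):
                    cx, cy = px + 0.5, py + 0.5  # sample at the pixel center
                    w0 = edge(x1, y1, x2, y2, cx, cy)
                    w1 = edge(x2, y2, x0, y0, cx, cy)
                    w2 = edge(x0, y0, x1, y1, cx, cy)
                    # Inside the triangle if all edge functions agree in sign
                    if (w0 >= 0 and w1 >= 0 and w2 >= 0) or (w0 <= 0 and w1 <= 0 and w2 <= 0):
                        covered.add((px, py))
    return covered
```

A tile-at-a-time traversal like this also keeps memory accesses local to one tile, which is why tile based pipelines are common where framebuffer bandwidth is scarce.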
CLAIM OF PRIORITY

This application is a continuation of U.S. patent application Ser. No. 16/445,093 filed Jun. 18, 2019, the entire disclosure of which is incorporated herein by reference. U.S. patent application Ser. No. 16/445,093 is a continuation of U.S. patent application Ser. No. 15/838,065 filed Dec. 11, 2017, now U.S. Pat. No. 10,350,485, the entire disclosure of which is incorporated herein by reference. U.S. patent application Ser. No. 15/838,065 is a continuation of U.S. patent application Ser. No. 13/631,812 filed Sep. 28, 2012, now U.S. Pat. No. 9,849,372, the entire disclosure of which is incorporated herein by reference.

US Referenced Citations (129)
Number Name Date Kind
6009458 Hawkins et al. Dec 1999 A
6115054 Giles Sep 2000 A
6267673 Miyamoto et al. Jul 2001 B1
6280323 Yamazaki et al. Aug 2001 B1
6402620 Naghi Jun 2002 B1
6631514 Le Oct 2003 B1
6699127 Lobb et al. Mar 2004 B1
6846238 Wells Jan 2005 B2
7159008 Wies et al. Jan 2007 B1
7286132 Kuhne Oct 2007 B2
7400326 Acocella Jul 2008 B1
7470196 Joshi Dec 2008 B1
7493365 Wies et al. Feb 2009 B2
7577826 Suba Aug 2009 B2
7584342 Nordquist Sep 2009 B1
7596647 Van Dyke Sep 2009 B1
7782327 Gonzalez et al. Aug 2010 B2
7841946 Walker et al. Nov 2010 B2
8267796 Iwakiri Sep 2012 B2
8321571 Crowder, Jr. et al. Nov 2012 B2
8661496 Perlman et al. Feb 2014 B2
8935487 Sengupta et al. Jan 2015 B2
9248374 Watson et al. Feb 2016 B2
9258012 Miura Feb 2016 B2
9623327 Miura et al. Apr 2017 B2
9656163 Miura et al. May 2017 B2
9658776 Miura May 2017 B2
9694276 Stine et al. Jul 2017 B2
9707476 Watson et al. Jul 2017 B2
9717989 Miura et al. Aug 2017 B2
9849372 Stine et al. Dec 2017 B2
9925468 Stine et al. Mar 2018 B2
10293251 Stine et al. May 2019 B2
10350485 Stine et al. Jul 2019 B2
10354443 Kazama et al. Jul 2019 B2
10406429 Zalewski Sep 2019 B2
20010031665 Taho et al. Oct 2001 A1
20020002510 Sharp et al. Jan 2002 A1
20020004584 Laughlin Jan 2002 A1
20020045484 Eck et al. Apr 2002 A1
20020065915 Anderson et al. May 2002 A1
20020161566 Uysal et al. Oct 2002 A1
20030037030 Dutta et al. Feb 2003 A1
20030061279 Llewellyn et al. Mar 2003 A1
20030064808 Hecht et al. Apr 2003 A1
20030177187 Levine et al. Sep 2003 A1
20030190950 Matsumoto Oct 2003 A1
20030225560 Garcia et al. Dec 2003 A1
20040160446 Gosalia Aug 2004 A1
20040179019 Sabella et al. Sep 2004 A1
20040224772 Canessa et al. Nov 2004 A1
20040238644 Leaming Dec 2004 A1
20040263519 Andrews Dec 2004 A1
20040266529 Chatani Dec 2004 A1
20050041031 Diard Feb 2005 A1
20050138328 Moy Jun 2005 A1
20050195187 Seiler et al. Sep 2005 A1
20050261062 Lewin et al. Nov 2005 A1
20050288954 McCarthy et al. Dec 2005 A1
20060009290 Taho et al. Jan 2006 A1
20060046819 Nguyen et al. Mar 2006 A1
20060080702 Diez et al. Apr 2006 A1
20060103659 Karandikar May 2006 A1
20060117260 Sloo et al. Jun 2006 A1
20060146057 Blythe Jul 2006 A1
20060148571 Hossack et al. Jul 2006 A1
20060160626 Gatto et al. Jul 2006 A1
20060259292 Solomon et al. Nov 2006 A1
20070060361 Nguyen et al. Mar 2007 A1
20070180438 Suba Aug 2007 A1
20070298866 Gaudiano et al. Dec 2007 A1
20080016491 Doepke Jan 2008 A1
20080032794 Ware et al. Feb 2008 A1
20080055321 Koduri Mar 2008 A1
20080113749 Williams et al. May 2008 A1
20080119286 Brunstetter et al. May 2008 A1
20080256271 Breed Oct 2008 A1
20080263527 Miura Oct 2008 A1
20080282241 Dong Nov 2008 A1
20080300053 Muller Dec 2008 A1
20090016430 Schmit Jan 2009 A1
20090082102 Sargaison et al. Mar 2009 A1
20090088236 Laude et al. Apr 2009 A1
20090094600 Sargaison et al. Apr 2009 A1
20090098943 Weber et al. Apr 2009 A1
20090131177 Pearce May 2009 A1
20090160867 Grossman Jun 2009 A1
20090162029 Glen Jun 2009 A1
20090231348 Mejdrich Sep 2009 A1
20090251475 Mathur Oct 2009 A1
20090253517 Bererton et al. Oct 2009 A1
20090282139 Mejdrich et al. Nov 2009 A1
20090303245 Soupikov et al. Dec 2009 A1
20090322751 Oneppo Dec 2009 A1
20100088296 Periyagaram et al. Apr 2010 A1
20100167809 Perlman et al. Jul 2010 A1
20100214301 Li et al. Aug 2010 A1
20100250650 Allen Sep 2010 A1
20100259536 Toksvig et al. Oct 2010 A1
20110013699 Persson Jan 2011 A1
20110098111 Saito et al. Apr 2011 A1
20110157196 Nave et al. Jun 2011 A1
20110218037 Singh Sep 2011 A1
20110276661 Gujarathi et al. Nov 2011 A1
20110299105 Morrison et al. Dec 2011 A1
20120021840 Johnson et al. Jan 2012 A1
20120052930 Mcgucken Mar 2012 A1
20120142425 Scott et al. Jun 2012 A1
20120299940 Dietrich et al. Nov 2012 A1
20130137518 Lucas May 2013 A1
20130165233 Wada Jun 2013 A1
20140004941 Watson et al. Jan 2014 A1
20140004949 Miura et al. Jan 2014 A1
20140004956 Miura et al. Jan 2014 A1
20140004957 Stine Jan 2014 A1
20140004962 Miura et al. Jan 2014 A1
20140066177 Zalewski Mar 2014 A1
20140092087 Kazama Apr 2014 A1
20140094299 Stine et al. Apr 2014 A1
20140094313 Watson et al. Apr 2014 A1
20140094314 Watson et al. Apr 2014 A1
20140094315 Stine et al. Apr 2014 A1
20170312639 Watson et al. Nov 2017 A1
20170312640 Watson et al. Nov 2017 A1
20180359246 Dannemiller et al. Dec 2018 A1
20190099680 Stine et al. Apr 2019 A1
20190270007 Stine et al. Sep 2019 A1
20190369842 Dolbakian et al. Dec 2019 A1
20210402301 Sherwani et al. Dec 2021 A1
Foreign Referenced Citations (17)
Number Date Country
1192013 Sep 1998 CN
101346162 Mar 2012 CN
101889442 Oct 2014 CN
1172132 Jan 2002 EP
1225767 Jul 2002 EP
2039404 Mar 2009 EP
2040163 Mar 2009 EP
H09146759 Jun 1997 JP
2003256209 Sep 2003 JP
2003298868 Oct 2003 JP
2009072601 Apr 2009 JP
2012034793 Feb 2012 JP
2364938 Aug 2009 RU
0233522 Apr 2002 WO
2004024259 Mar 2004 WO
2008073493 Jun 2008 WO
2014052205 Apr 2014 WO
Non-Patent Literature Citations (107)
Entry
Chinese Office Action for CN Application No. 201380045408.4, dated Sep. 20, 2016.
Communication under EPC Rule 94(3) dated Apr. 23, 2018 in European Patent Application No. 13881307.6.
European Search Report and Written Opinion for EP Application No. 13841130.1, dated Oct. 4, 2016.
European Search Report and Written Opinion for European Application No. PCT/US2013/047856, dated Jul. 28, 2016.
Final Office Action for U.S. Appl. No. 15/650,729, dated Jan. 7, 2019.
Final Office Action for U.S. Appl. No. 15/650,755, dated Jan. 7, 2019.
Final Office Action for U.S. Appl. No. 13/631,725, dated Dec. 19, 2014.
Final Office Action for U.S. Appl. No. 13/631,740, dated Jul. 27, 2015.
Final Office Action for U.S. Appl. No. 13/631,785, dated Dec. 4, 2015.
Final Office Action for U.S. Appl. No. 13/631,803, dated Feb. 1, 2016.
Final Office Action for U.S. Appl. No. 13/631,812, dated Aug. 29, 2014.
Final Office Action for U.S. Appl. No. 13/790,311, dated Jul. 15, 2016.
Final Office Action for U.S. Appl. No. 13/790,320, dated Feb. 10, 2016.
Final Office Action for U.S. Appl. No. 13/790,320, dated Jan. 15, 2015.
Final Office Action for U.S. Appl. No. 13/791,379, dated May 13, 2015.
Final Office Action for U.S. Appl. No. 13/791,420, dated Jun. 11, 2014.
Final Office Action for U.S. Appl. No. 13/791,420, dated Oct. 9, 2015.
Final Office Action for U.S. Appl. No. 13/791,434, dated Feb. 17, 2016.
Final Office Action for U.S. Appl. No. 13/791,434, dated Jun. 23, 2015.
Final Office Action for U.S. Appl. No. 15/019,891, dated Oct. 19, 2016.
Final Office Action for U.S. Appl. No. 13/631,803, dated Apr. 16, 2015.
Final Office Action for U.S. Appl. No. 13/790,311, dated Mar. 27, 2015.
First Examination Report dated Feb. 23, 2018 for Indian Patent Application 3524/CHE/2013.
Grand Theft Auto: San Andreas Guide—Territories, https://www.youtube.com/watch?v=5d2GY-gr, May 29, 2012.
GTA San Andreas How to start a gang war, https://www.youtube.com/watch?v=9N4908kGtLO, Jan. 13, 2013.
International Search Report and Written Opinion for International Application No. PCT/US2013/074813, dated May 29, 2014.
Japanese Office Action for Japan Code Application No. 2015-517495, dated Feb. 9, 2016.
Non-Final Office Action for U.S. Appl. No. 13/791,379, dated Apr. 21, 2017.
Non-Final Office Action for U.S. Appl. No. 13/790,311, dated Jun. 27, 2013.
Non-Final Office Action for U.S. Appl. No. 13/631,725, dated Mar. 16, 2015.
Non-Final Office Action for U.S. Appl. No. 13/631,725, dated Sep. 12, 2014.
Non-Final Office Action for U.S. Appl. No. 13/631,740, dated Oct. 21, 2014.
Non-Final Office Action for U.S. Appl. No. 13/631,812, dated Jun. 3, 2016.
Non-Final Office Action for U.S. Appl. No. 13/631,812, dated Mar. 28, 2014.
Non-Final Office Action for U.S. Appl. No. 13/790,311, dated Feb. 26, 2014.
Non-Final Office Action for U.S. Appl. No. 13/790,311, dated Sep. 9, 2014.
Non-Final Office Action for U.S. Appl. No. 13/790,320, dated Jun. 18, 2014.
Non-Final Office Action for U.S. Appl. No. 13/791,379, dated Mar. 27, 2014.
Non-Final Office Action for U.S. Appl. No. 13/791,420, dated Mar. 27, 2014.
Non-Final Office Action for U.S. Appl. No. 13/792,664, dated Dec. 6, 2018.
Non-Final Office Action for U.S. Appl. No. 13/792,664, dated Jun. 23, 2014.
Non-Final Office Action for U.S. Appl. No. 15/640,483, dated Oct. 4, 2018.
Non-Final Office Action for U.S. Appl. No. 15/650,729, dated Aug. 2, 2018.
Non-Final Office Action for U.S. Appl. No. 15/650,755, dated Aug. 2, 2018.
Non-Final Office Action for U.S. Appl. No. 15/838,065, dated Nov. 7, 2018.
Non-Final Office Action for U.S. Appl. No. 16/416,060, dated Oct. 2, 2020.
Non-Final Office Action for U.S. Appl. No. 16/445,093, dated Jul. 28, 2020.
Non-Final Office Action for U.S. Appl. No. 13/792,664, dated Jul. 31, 2017.
Non-Final Office Action for U.S. Appl. No. 13/791,434, dated Nov. 26, 2014.
Non-Final Office Action for U.S. Appl. No. 13/631,740, dated Feb. 27, 2015.
Non-Final Office Action for U.S. Appl. No. 13/631,740, dated Sep. 30, 2016.
Non-Final Office Action for U.S. Appl. No. 13/631,785, dated May 21, 2015.
Non-Final Office Action for U.S. Appl. No. 13/631,785, dated Nov. 3, 2016.
Non-Final Office Action for U.S. Appl. No. 13/631,803, dated Sep. 17, 2015.
Non-Final Office Action for U.S. Appl. No. 13/631,812, dated Jan. 18, 2017.
Non-Final Office Action for U.S. Appl. No. 13/790,311, dated Nov. 19, 2015.
Non-Final Office Action for U.S. Appl. No. 13/790,320, dated Jul. 28, 2015.
Non-Final Office Action for U.S. Appl. No. 13/791,379, dated Jul. 1, 2016.
Non-Final Office Action for U.S. Appl. No. 13/791,420, dated Apr. 9, 2015.
Non-Final Office Action for U.S. Appl. No. 14/183,351, dated May 11, 2015.
Non-Final Office Action for U.S. Appl. No. 15/019,891, dated May 6, 2016.
Non-Final Office Action for U.S. Appl. No. 15/225,361, dated Oct. 21, 2016.
Non-Final Office Action for U.S. Appl. No. 13/631,785, dated Oct. 22, 2014.
Non-Final Office Action for U.S. Appl. No. 13/631,803, dated Oct. 14, 2014.
Non-Final Office Action for U.S. Appl. No. 13/791,379, dated Oct. 16, 2014.
Non-Final/Final Office Action for U.S. Appl. No. 13/792,664, dated Jul. 31, 2017.
Non-Final/Final Office Action for U.S. Appl. No. 13/792,664, dated Apr. 6, 2018.
Non-Final/Final Office Action for U.S. Appl. No. 15/937,531, dated Jul. 9, 2019.
Notice of Allowance for U.S. Appl. No. 13/631,812, dated Aug. 9, 2017.
Notice of Allowance for U.S. Appl. No. 13/791,379, dated Nov. 9, 2017.
Notice of Allowance for U.S. Appl. No. 13/792,664, dated Apr. 26, 2019.
Notice of Allowance for U.S. Appl. No. 15/640,483, dated Jan. 17, 2019.
Notice of Allowance for U.S. Appl. No. 15/650,755, dated Aug. 23, 2019.
Notice of Allowance for U.S. Appl. No. 15/838,065, dated Feb. 25, 2019.
Notice of Allowance for U.S. Appl. No. 15/937,531, dated Jan. 23, 2020.
Notice of Allowance for U.S. Appl. No. 16/445,093, dated Nov. 16, 2020.
Notice of Allowance for U.S. Appl. No. 16/650,729, dated Month Day, Year.
Notice of Allowance for U.S. Appl. No. 13/631,725, dated Oct. 6, 2015.
Notice of Allowance for U.S. Appl. No. 13/631,740, dated Mar. 16, 2017.
Notice of Allowance for U.S. Appl. No. 13/631,785, dated Feb. 27, 2017.
Notice of Allowance for U.S. Appl. No. 13/631,803, dated Sep. 17, 2015.
Notice of Allowance for U.S. Appl. No. 13/790,311, dated Mar. 30, 2017.
Notice of Allowance for U.S. Appl. No. 13/790,320, dated Dec. 5, 2016.
Notice of Allowance for U.S. Appl. No. 14/183,351, dated Oct. 5, 2015.
Notice of Allowance for U.S. Appl. No. 15/019,891, dated Jan. 26, 2017.
Official Action dated Nov. 19 for Brazilian Patent Application No. BR102013022028-0.
PCT International Search Report and Written Opinion for International Application No. PCT/US2013/061023, dated Jan. 23, 2014.
PCT International Search Report and Written Opinion for International Application No. PCT/US2013/061029, dated Jan. 23, 2014.
Playstation2, 2004, Grand Theft Auto—San Andreas.
U.S. Appl. No. 61/666,628, entitled “Adding Triggers to Cloud-Based Emulated Games” to Victor Octav Suba Miura et al., filed Jun. 30, 2013.
U.S. Appl. No. 61/666,645, entitled “Haptic Enhancements for Emulated Video Game Not Originally Designed With Haptic Capabilities” to Victor Octav Suba Miura, et al., filed Jun. 29, 2012.
U.S. Appl. No. 61/666,665, entitled “Conversion of Haptic Events Into Screen Events” to Brian Michael, et al., filed Jun. 30, 2013.
U.S. Appl. No. 61/666,679, entitled “Suspending State of Cloud-Based Legacy Application” to Jacob P. Stine et al., filed Jun. 30, 2013.
U.S. Appl. No. 61/666,628, to Victor Octav Suba Miura, filed Jun. 29, 2012.
Final Office Action for U.S. Appl. No. 16/889,597, dated Dec. 13, 2021.
Thin Client—Wikipedia (Retrieved from https://en.wikipedia.org/wiki/Thin_client) last edited Oct. 10, 2021, and retrieved on Dec. 1, 2021.
U.S. Appl. No. 61/666,645, to Victor Octav Suba Miura, filed Jun. 29, 2012.
U.S. Appl. No. 61/666,665, to Brian Michael Christopher Watson, filed Jun. 29, 2012.
U.S. Appl. No. 61/666,679, to Jacob P. Stine, filed Jun. 29, 2012.
U.S. Appl. No. 61/694,718, to Gary M. Zalewski, filed Aug. 29, 2012.
U.S. Appl. No. 61/794,811, to Victor Octav Suba Miura, filed Mar. 15, 2013.
Final Office Action for U.S. Appl. No. 16/889,597, dated Oct. 21, 2022.
Non-Final Office Action for U.S. Appl. No. 17/328,955, dated Oct. 4, 2022.
Non-Final/Final Office Action for U.S. Appl. No. 16/889,597, dated May 12, 2022.
Non-Final Office Action for U.S. Appl. No. 16/889,597, dated Jun. 24, 2021.
Chinese Office Action for Chinese Application No. 201910313764.9, dated Apr. 24, 2023.
Notice of Allowance for U.S. Appl. No. 16/889,597, dated Mar. 29, 2023.
Related Publications (1)
Number Date Country
20210162295 A1 Jun 2021 US
Continuations (3)
Number Date Country
Parent 16445093 Jun 2019 US
Child 17174606 US
Parent 15838065 Dec 2017 US
Child 16445093 US
Parent 13631812 Sep 2012 US
Child 15838065 US