This application is related to commonly-assigned, provisional application Ser. No. 61/666,628, filed Jun. 29, 2012, and entitled “DETERMINING TRIGGERS FOR CLOUD-BASED EMULATED GAMES”, the entire disclosures of which are incorporated herein by reference.
This application is related to commonly-assigned, application Ser. No. 13/790,311, filed Mar. 8, 2013 (now U.S. Patent Application Publication Number 2014/0004956), and entitled “ADDING TRIGGERS TO CLOUD-BASED EMULATED GAMES” to Victor Octav Suba Miura, Brian Michael Christopher Watson, Jacob P. Stine, and Nicholas J. Cardell, the entire disclosures of which are incorporated herein by reference.
This application is related to commonly-assigned, provisional application Ser. No. 61/666,645, filed Jun. 29, 2012, and entitled “HAPTIC ENHANCEMENTS FOR EMULATED VIDEO GAME NOT ORIGINALLY DESIGNED WITH HAPTIC CAPABILITIES”, the entire disclosures of which are incorporated herein by reference.
This application is related to commonly-assigned, application Ser. No. 13/791,434, filed Mar. 8, 2013 (now U.S. Patent Application Publication Number 2014/0004949), and entitled “HAPTIC ENHANCEMENTS FOR EMULATED VIDEO GAME NOT ORIGINALLY DESIGNED WITH HAPTIC CAPABILITIES” to Victor Octav Suba Miura and Brian Michael Christopher Watson, the entire disclosures of which are incorporated herein by reference.
This application is related to commonly-assigned, provisional application Ser. No. 61/666,665, filed Jun. 29, 2012, and entitled “CONVERSION OF HAPTIC EVENTS INTO SCREEN EVENTS”, the entire disclosures of which are incorporated herein by reference.
This application is related to commonly-assigned, application Ser. No. 13/791,420, filed Mar. 8, 2013 (now U.S. Patent Application Publication Number 2014/0004941), and entitled “CONVERSION OF HAPTIC EVENTS INTO SCREEN EVENTS” to Brian Michael Christopher Watson and Victor Octav Suba Miura, the entire disclosures of which are incorporated herein by reference.
This application is related to commonly-assigned, provisional application Ser. No. 61/666,679, filed Jun. 29, 2012, and entitled “SUSPENDING STATE OF CLOUD-BASED LEGACY APPLICATIONS”, the entire disclosures of which are incorporated herein by reference.
This application is related to commonly-assigned, application Ser. No. 13/791,379, filed Mar. 8, 2013 (now U.S. Patent Application Publication Number 2014/0004957), and entitled “SUSPENDING STATE OF CLOUD-BASED LEGACY APPLICATIONS” to Jacob P. Stine, Brian Michael Christopher Watson, Victor Octav Suba Miura, and Nicholas J. Cardell, the entire disclosures of which are incorporated herein by reference.
This application is related to commonly-assigned, application Ser. No. 13/631,725, filed Sep. 28, 2012 (now U.S. Pat. No. 9,248,374), and entitled “REPLAY AND RESUMPTION OF SUSPENDED GAME” to Brian Michael Christopher Watson, Victor Octav Suba Miura, Jacob P. Stine and Nicholas J. Cardell, the entire disclosures of which are incorporated herein by reference.
This application is related to commonly-assigned, application Ser. No. 13/631,740, filed Sep. 28, 2012 (now U.S. Patent Application Publication Number 2014/0094314), and entitled “METHOD FOR CREATING A MINI-GAME” to Brian Michael Christopher Watson, Victor Octav Suba Miura, and Jacob P. Stine, the entire disclosures of which are incorporated herein by reference.
This application is related to commonly-assigned, application Ser. No. 13/631,785, filed Sep. 28, 2012 (now U.S. Patent Application Publication Number 2014/0094315), and entitled “PRE-LOADING TRANSLATED CODE IN CLOUD BASED EMULATED APPLICATIONS”, to Jacob P. Stine, Victor Octav Suba Miura, Brian Michael Christopher Watson, and Nicholas J. Cardell, the entire disclosures of which are incorporated herein by reference.
This application is related to commonly-assigned, application Ser. No. 13/631,812, filed Sep. 28, 2012 (now U.S. Patent Application Publication Number 2014/0094299), entitled “METHOD AND APPARATUS FOR IMPROVING EFFICIENCY WITHOUT INCREASING LATENCY IN EMULATION OF A LEGACY APPLICATION TITLE”, to Jacob P. Stine and Victor Octav Suba Miura, the entire disclosures of which are incorporated herein by reference.
The present disclosure is related to video game emulation. Among other things, this application describes a method and apparatus for emulating a graphics processing unit (GPU) over a cloud based network with tile-based rasterization.
In three dimensional graphics rendering, a graphics processing unit (GPU) may transform a three-dimensional virtual object into a two-dimensional image that may be displayed on a screen. The GPU may use one or more graphics pipelines for processing information initially provided to the GPU, such as graphics primitives. Graphics primitives are properties that are used to describe a three-dimensional object that is being rendered. By way of example, graphics primitives may be lines, triangles, or vertices that form a three dimensional object when combined. Each of the graphics primitives may contain additional information to further define the three dimensional object such as, but not limited to X-Y-Z coordinates, red-green-blue (RGB) values, translucency, texture, and reflectivity.
A critical step in a graphics pipeline is the rasterization step. Rasterization is the process by which the graphics primitives describing the three-dimensional object are transformed into a two-dimensional image representation of the scene. The two-dimensional image is composed of individual pixels, each of which may contain unique RGB values. Typically, the GPU will rasterize a three-dimensional image by stepping across the entire three-dimensional object in a raster pattern along a two-dimensional plane. Each step along the line represents one pixel. At each step, the GPU must determine whether the pixel should be rendered and delivered to the frame buffer. If the pixel has not changed from a previous rendering, then there is no need to deliver an updated pixel to the frame buffer. Therefore, each raster line may have a variable number of pixels that must be processed. In order to quickly process the three-dimensional object, a plurality of rasterization threads may each be assigned one or more of the raster lines to process, and the rasterization threads may be executed in parallel.
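The raster-pattern scan described above can be sketched as follows. This is a minimal illustration only: the function name, the list-of-lists image layout, and the simple value-comparison change test are hypothetical conventions, not anything prescribed by the disclosure.

```python
def scan_raster_lines(new_image, frame_buffer):
    """Return, for each raster line, the pixels that actually changed.

    new_image and frame_buffer are 2-D lists of pixel values (e.g. RGB
    tuples). Only pixels whose value differs from what the frame buffer
    already holds need to be delivered, so each raster line may yield a
    variable number of pixels to process.
    """
    changed_per_line = []
    for y, line in enumerate(new_image):
        changed = [(x, y, px) for x, px in enumerate(line)
                   if px != frame_buffer[y][x]]
        changed_per_line.append(changed)
    return changed_per_line
```

Because each line's list may have a different length, assigning whole raster lines to parallel threads gives each thread an unpredictable load, which is the balancing problem discussed below.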
When a GPU is being emulated through software, the processing may not be as efficient or as highly optimized as it would be in the original hardware-based GPU. Therefore, if the processing load on each rasterization thread is not properly balanced, a delay or latency in the execution of the rasterization may develop. Further, it is difficult to predict the number of pixels that will be rendered along each raster line before it is processed. Without knowing a priori the processing load each rasterization thread is assigned, it is difficult to ensure that the load can be evenly balanced.
In order to prevent latencies, the emulation software may dedicate an increased number of available rasterization threads to the rasterization process. This increases the demand on the processor running the emulation software. Also, in the case of cloud-based services, the number of instances of the emulation software that will be running at a given time is not known beforehand. If the emulation software requires extensive processing power, then scaling the system for increased users becomes prohibitively expensive. By way of example, during peak usage hours, there may be many instances of the emulator being executed on the network. This requires that resources such as processing power be used as efficiently as possible.
Further, processing efficiency cannot be improved by decreasing the frame rate that the emulator is capable of producing. The frame rate should ideally remain above 24 frames per second in order to ensure smooth animation. In order to provide a scalable software emulator of a GPU that is implemented over a cloud-based network, a rasterization method that allows for efficient load balancing is needed.
It is within this context that aspects of the present disclosure arise.
Although the following detailed description contains many specific details for the purposes of illustration, anyone of ordinary skill in the art will appreciate that many variations and alterations to the following details are within the scope of the present disclosure. Accordingly, the aspects of the present disclosure described below are set forth without any loss of generality to, and without imposing limitations upon, the claims that follow this description.
Aspects of the present disclosure describe a software based emulator of a graphics processing unit (GPU) that is configured to operate over a cloud-based network. A virtual image containing graphics primitives is first divided into a plurality of tiles. Each of the tiles has a predetermined number of image pixels. The emulator may then scan each of the tiles to determine how many of the image pixels in each tile need to be rendered. The number of pixels that need to be rendered for each tile is then delivered to a load balancer. The load balancer distributes the processing between rasterization threads. Each rasterization thread will be assigned approximately the same total number of pixels to be rendered. The rasterization threads then rasterize their assigned tiles, and render the pixels that require rendering. Additionally, the rasterization threads may deliver the rendered pixels to a frame buffer. The frame buffer builds a frame from the rendered pixels and then delivers the frame over the network to a client device platform.
Additional aspects of the present disclosure describe a software based emulator of a GPU that is configured to operate over a cloud-based network. A virtual image containing graphics primitives is first divided into a plurality of tiles. Each of the tiles has a predetermined number of image pixels. The emulator may then scan each of the tiles to determine if any of the image pixels that are within a tile need to be rendered. Pixels that do not need to be rendered are sometimes referred to herein as “ignorable” pixels. If at least one image pixel in a tile needs to be rendered, then a message is sent to a load balancer indicating that the tile is “full”. Once each tile has been scanned, the load balancer can divide the “full” tiles evenly between the available rasterization threads. Each rasterization thread then rasterizes the assigned tiles and delivers the rendered pixels to a frame buffer. The frame buffer builds a frame from the rendered pixels and then delivers the frame over the network to a client device platform.
Client device platform 103 may include a central processor unit (CPU) 131. By way of example, a CPU 131 may include one or more processors, which may be configured according to, e.g., a dual-core, quad-core, multi-core, or Cell processor architecture. The client device platform 103 may also include a memory 132 (e.g., RAM, DRAM, ROM, and the like). The CPU 131 may execute a process-control program 133, portions of which may be stored in the memory 132. The client device platform 103 may also include well-known support circuits 140, such as input/output (I/O) circuits 141, power supplies (P/S) 142, a clock (CLK) 143 and cache 144. The client device platform 103 may optionally include a mass storage device 134 such as a disk drive, CD-ROM drive, tape drive, or the like to store programs and/or data. The client device platform 103 may also optionally include a display unit 137 and a user interface unit 138 to facilitate interaction between the client device platform 103 and a user. The display unit 137 may be in the form of a cathode ray tube (CRT) or flat panel screen that displays text, numerals, or graphical symbols. The user interface unit 138 may include a keyboard, mouse, joystick, light pen, or other device. A controller 145 may be connected to the client device platform 103 through the I/O circuit 141 or it may be directly integrated into the client device platform 103. The controller 145 may facilitate interaction between the client device platform 103 and a user. The controller 145 may include a keyboard, mouse, joystick, light pen, hand-held controls or other device. The controller 145 may be capable of generating a haptic response 146. By way of example and not by way of limitation, the haptic response 146 may be vibrations or any other feedback corresponding to the sense of touch. The client device platform 103 may include a network interface 139, configured to enable the use of Wi-Fi, an Ethernet port, or other communication methods.
The network interface 139 may incorporate suitable hardware, software, firmware or some combination of two or more of these to facilitate communication via an electronic communications network 160. The network interface 139 may be configured to implement wired or wireless communication over local area networks and wide area networks such as the Internet. The client device platform 103 may send and receive data and/or requests for files via one or more data packets over the network 160.
The preceding components may exchange signals with each other via an internal system bus 150. The client device platform 103 may be a general purpose computer that becomes a special purpose computer when running code that implements embodiments of the present invention as described herein.
The emulator 107 may include a central processor unit (CPU) 131′. By way of example, a CPU 131′ may include one or more processors, which may be configured according to, e.g., a dual-core, quad-core, multi-core, or Cell processor architecture. The emulator 107 may also include a memory 132′ (e.g., RAM, DRAM, ROM, and the like). The CPU 131′ may execute a process-control program 133′, portions of which may be stored in the memory 132′. The emulator 107 may also include well-known support circuits 140′, such as input/output (I/O) circuits 141′, power supplies (P/S) 142′, a clock (CLK) 143′ and cache 144′. The emulator 107 may optionally include a mass storage device 134′ such as a disk drive, CD-ROM drive, tape drive, or the like to store programs and/or data. The emulator 107 may also optionally include a display unit 137′ and user interface unit 138′ to facilitate interaction between the emulator 107 and a user who requires direct access to the emulator 107. By way of example and not by way of limitation a snapshot generator or engineer 102 may need direct access to the emulator 107 in order to program the emulator 107 to properly emulate a desired legacy game 106 or to add additional mini-game capabilities to a legacy game 106. The display unit 137′ may be in the form of a cathode ray tube (CRT) or flat panel screen that displays text, numerals, or graphical symbols. The user interface unit 138′ may include a keyboard, mouse, joystick, light pen, or other device. The emulator 107 may include a network interface 139′, configured to enable the use of Wi-Fi, an Ethernet port, or other communication methods.
The network interface 139′ may incorporate suitable hardware, software, firmware or some combination of two or more of these to facilitate communication via the electronic communications network 160. The network interface 139′ may be configured to implement wired or wireless communication over local area networks and wide area networks such as the Internet. The emulator 107 may send and receive data and/or requests for files via one or more data packets over the network 160.
The preceding components may exchange signals with each other via an internal system bus 150′. The emulator 107 may be a general purpose computer that becomes a special purpose computer when running code that implements embodiments of the present invention as described herein.
Emulator 107 may access a legacy game 106 that has been selected by the client device platform 103 for emulation through the internal system bus 150′. There may be more than one legacy game 106 stored in the emulator. The legacy games may also be stored in the memory 132′ or in the mass storage device 134′. Additionally, one or more legacy games 106 may be stored at a remote location accessible to the emulator 107 over the network 160. Each legacy game 106 contains game code 108. When the legacy game 106 is emulated, the game code 108 produces legacy game data 109.
By way of example, a legacy game 106 may be any game that is not compatible with a target platform. By way of example and not by way of limitation, the legacy game 106 may have been designed to be played on Sony Computer Entertainment's PlayStation console, but the target platform is a home computer. By way of example, the legacy game 106 may have been designed to be played on a PlayStation 2 console, but the target platform is a PlayStation 3 console. Further, by way of example and not by way of limitation, a legacy game 106 may have been designed to be played on a PlayStation console, but the target platform is a hand held console such as the PlayStation Vita from Sony Computer Entertainment.
Emulator 107 may be a deterministic emulator. A deterministic emulator is an emulator that may process a given set of game inputs the same way every time that the same set of inputs are provided to the emulator 107. This may be accomplished by eliminating any dependencies in the code run by the emulator 107 that depend from an asynchronous activity. Asynchronous activities are events that occur independently of the main program flow. This means that actions may be executed in a non-blocking scheme in order to allow the main program flow to continue processing. Therefore, by way of example, and not by way of limitation, the emulator 107 may be deterministic when the dependencies in the code depend from basic blocks that always begin and end with synchronous activity. By way of example, basic blocks may be predetermined increments of code at which the emulator 107 checks for external events or additional game inputs. The emulator 107 may also wait for anything that runs asynchronously within a system component to complete before proceeding to the next basic block. A steady state within the emulator 107 may be when all of the basic blocks are in lock step.
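The basic-block scheme described above might be sketched as follows. Everything here is a hypothetical stand-in (the function, the integer state, and the pure `step` function); it is only meant to show why sampling external inputs solely at block boundaries makes the result reproducible for a given input sequence.

```python
def run_deterministic(num_blocks, inputs, step):
    """Run `step` over a fixed number of basic blocks.

    `inputs` maps a block index to the input sampled at that block's
    boundary; `step` is a pure (state, input) -> state function. Because
    inputs are checked only at block boundaries and `step` has no other
    dependencies, the same inputs always yield the same final state.
    """
    state = 0
    for block in range(num_blocks):
        # External events and game inputs are checked only here, at the
        # start of each basic block, never mid-block.
        event = inputs.get(block)
        state = step(state, event)
    return state
```

Running this twice with identical inputs produces identical states, which is the property a deterministic emulator needs.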
Once the virtual image 320 has been divided into the tiles 315, method 200 continues with the emulator 107 determining which tiles 315 have pixels that need to be rendered at 262. Each tile 315 will be scanned by the emulator 107 to determine how many of the pixels within the tile 315 need to be rendered. A pixel needs to be rendered if the value of the new pixel for the frame 319 being rasterized is different from the value of the pixel presently stored in the frame buffer 318. Otherwise, the pixel is “ignorable”. By way of example, and not by way of limitation, a pixel value may include X-Y-Z coordinates, RGB values, translucency, texture, reflectivity or any combination thereof. The number of pixels that need to be rendered for a given tile 315 may then be delivered to the load balancer 317 at 263.
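The per-tile scan at 262 can be illustrated with a short sketch; the flat pixel sequences and the equality test standing in for the pixel-value comparison are assumptions for illustration, not the patent's data layout.

```python
def count_renderable(tile_pixels, buffer_pixels):
    """Count the pixels in a tile whose new value differs from the value
    presently stored in the frame buffer; matching pixels are "ignorable".

    Both arguments are flat sequences of pixel values (e.g. tuples of
    coordinates, RGB, translucency, texture, reflectivity).
    """
    return sum(1 for new, old in zip(tile_pixels, buffer_pixels)
               if new != old)
```

The resulting per-tile counts are what would be handed to the load balancer at step 263.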
By way of example, and not by way of limitation, the emulator 107 may determine how many pixels need to be rendered for each tile by determining whether the tile is entirely within a polygon. Each polygon is defined by its vertices. Two vertices of a polygon may be used to generate a line equation in the form of Ax+By+C=0. Each polygon may be made up of multiple lines. Once the size and location of the polygon has been defined, the emulator 107 may determine whether all corners of the tile lie within the polygon. If all four corners are within the polygon, then that tile is fully covered and it may be easy to apply a texture or calculate RGB values from the top-left corner pixel value. If the tile is partially outside the polygon, then the pixel values are determined on a per-pixel basis.
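The corner test might be implemented along these lines for a convex polygon. The coefficient convention (vertices listed counter-clockwise, interior where Ax+By+C ≥ 0) and the function names are assumptions of this sketch, not details from the disclosure.

```python
def edge_coeffs(p0, p1):
    """Coefficients (A, B, C) of the line A*x + B*y + C = 0 through p0
    and p1, oriented so the interior of a counter-clockwise polygon
    satisfies A*x + B*y + C >= 0."""
    (x0, y0), (x1, y1) = p0, p1
    return (y0 - y1, x1 - x0, x0 * y1 - x1 * y0)


def tile_fully_covered(corners, polygon):
    """True if every tile corner lies inside the convex polygon.

    Convexity and counter-clockwise vertex order are assumptions here;
    a tile failing this test would fall back to per-pixel evaluation.
    """
    n = len(polygon)
    edges = [edge_coeffs(polygon[i], polygon[(i + 1) % n])
             for i in range(n)]
    return all(a * x + b * y + c >= 0
               for (a, b, c) in edges
               for (x, y) in corners)
```

A tile whose four corners all pass the test is fully covered, so textures or RGB values can be derived cheaply from one corner; otherwise each pixel is evaluated individually.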
The load balancer 317 begins assigning tiles 315 to one or more rasterization threads 316 for rasterization at 264. Load balancer 317 distributes the processing load amongst the available rasterization threads 316 so that each thread 316 has approximately the same processing load. Ideally, the load balancer 317 will distribute the tiles 315 such that each rasterization thread 316 will render the same number of pixels.
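One plausible way for a load balancer to give every thread approximately the same pixel total is a greedy heaviest-first assignment: always hand the next-largest tile to the currently lightest thread. This is an illustrative sketch, not necessarily the algorithm used by load balancer 317.

```python
import heapq

def balance_tiles(tile_pixel_counts, num_threads):
    """Distribute tiles so each thread renders roughly the same number
    of pixels. Returns a list of tile-index lists, one per thread."""
    heap = [(0, t) for t in range(num_threads)]  # (current load, thread)
    heapq.heapify(heap)
    assignments = [[] for _ in range(num_threads)]
    # Assign heaviest tiles first so the loads even out.
    for idx in sorted(range(len(tile_pixel_counts)),
                      key=lambda i: -tile_pixel_counts[i]):
        load, t = heapq.heappop(heap)
        assignments[t].append(idx)
        heapq.heappush(heap, (load + tile_pixel_counts[idx], t))
    return assignments
```

Because assignment is driven by the per-tile pixel counts gathered during the scan, threads end up with nearly equal totals even when individual tiles vary widely.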
According to method 200 the rasterization threads 316 begin rasterizing the tiles 315 assigned to them by the load balancer 317 at 265. The rasterization proceeds according to a traditional raster pattern, except that it is limited to the dimensions of a single tile 315. During the rasterization, every pixel that must be rendered is delivered to the frame buffer 318 at 266. The frame buffer 318 may then build the frame 319 that will be displayed on the display 137 of the client device platform 103 at 267. At 268, the emulator 107 delivers the frame 319 to the client device platform 103 over the network 160. Additionally, the emulator 107 may use a video codec to encode the frame 319 before delivering it to the client device platform 103. The client device platform 103 may have a suitable codec configured to decode the encoded frame 319.
As shown in
By way of example, in a static load balancing arrangement hardware (e.g., PowerVR) statically assigns responsibility for different tiles to different processors. The number of assignments is equal to the number of processor cores. In a dynamic case, by contrast, there are multiple asynchronous threads, e.g., four threads, but fewer threads than queues. A queue is a group of tiles that need to be processed. Each queue can have a state ID that allows state to be maintained. The state for an arbitrary number of tiles may be stored separately, e.g., in a different buffer. Storing the states separately reduces the amount of memory copying that needs to be done. By way of example, there may be one or more queues. The load balancer 317 may then assign an idle thread to a queue that is waiting for rendering. This maintains cache locality by keeping the threads occupied.
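The dynamic queue arrangement might be sketched as follows: tiles are grouped into queues, each queue carries a state ID, and idle worker threads pull whole queues so that one thread processes all tiles of a queue together. All names here are hypothetical.

```python
import queue
import threading

def render_queues(tile_queues, num_threads, render_tile):
    """Dynamically assign queues of tiles to fewer worker threads.

    tile_queues is a list of tile groups; the list index serves as the
    state ID under which results (and, conceptually, per-queue state
    kept in a separate buffer) are stored.
    """
    work = queue.Queue()
    for state_id, tiles in enumerate(tile_queues):
        work.put((state_id, tiles))

    results = {}
    lock = threading.Lock()

    def worker():
        while True:
            try:
                state_id, tiles = work.get_nowait()
            except queue.Empty:
                return  # no queues left waiting for rendering
            # One thread handles the whole queue, keeping its tiles'
            # working set cache-local.
            rendered = [render_tile(t) for t in tiles]
            with lock:
                results[state_id] = rendered

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

An idle thread immediately picks up the next waiting queue, so the threads stay occupied even though there are fewer threads than queues.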
Next at 476, the emulator 107 may be instructed to have the rasterization threads 316 begin rasterizing each of the tiles 315. During the rasterization, the emulator 107 may be instructed to deliver the rendered pixels to the frame buffer 318 at 477. The emulator 107 may then be instructed to generate the frame 319 from the pixels in the frame buffer 318. Thereafter, the emulator 107 may be provided with instructions for delivering the frame 319 to a client device platform 103 over a network 160.
Once the virtual image 320 has been divided into the tiles 315, method 201 continues with the emulator 107 determining if any pixels need to be rendered for each tile at 272. If there is at least one pixel that needs to be rendered in a tile 315, then that tile may be designated as a “full” tile 315. If there are no pixels that need to be rendered in a tile 315 (i.e., all pixels in the tile are ignorable), then that tile may be designated as an “empty” tile 315. A “full” designation will be interpreted by the load balancer 317 as indicating that all pixels in the tile 315 need to be rendered, and an “empty” designation will be interpreted by the load balancer 317 as indicating that none of the pixels in the tile 315 need to be rendered. The use of “empty” and “full” designations may improve the scanning speed of the emulator 107 because each tile 315 does not need to be completely scanned. Once a single pixel that requires rendering is detected, the scan of the tile 315 may cease. The identification of which tiles 315 are “full” may then be delivered to the load balancer 317 at 273.
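The early-exit scan that yields the “full”/“empty” designations could look like this sketch; the names and the equality comparison standing in for the change test are illustrative only.

```python
def classify_tile(tile_pixels, buffer_pixels):
    """Designate a tile "full" or "empty".

    Returns "full" as soon as a single pixel differs from the frame
    buffer, so the scan stops early; returns "empty" only after every
    pixel is found to be ignorable.
    """
    for new, old in zip(tile_pixels, buffer_pixels):
        if new != old:
            return "full"  # early exit: no need to finish the scan
    return "empty"
```

Only the identities of the “full” tiles need to be sent on to the load balancer, which then divides them evenly among the rasterization threads.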
The load balancer 317 begins assigning “full” tiles 315 to one or more rasterization threads 316 for rasterization at 274. Load balancer 317 distributes the processing load amongst the available rasterization threads 316 so that each thread 316 has approximately the same processing load. Ideally, the load balancer 317 will distribute the tiles 315 such that each rasterization thread 316 will render the same number of pixels.
According to method 201 the rasterization threads 316 begin rasterizing the tiles 315 assigned to them by the load balancer 317 at 275. The rasterization proceeds according to a traditional raster pattern, except that it is limited to the dimensions of a single tile 315. During the rasterization, every pixel that must be rendered is delivered to the frame buffer 318 at 276. The frame buffer 318 may then build the frame 319 that will be displayed on the display 137 of the client device platform 103 at 277. At 278, the emulator 107 delivers the frame 319 to the client device platform 103 over the network 160. Additionally, the emulator 107 may use a video codec to encode the frame 319 before delivering it to the client device platform 103. The client device platform 103 may have a suitable codec configured to decode the encoded frame 319.
As shown in
As may be seen from the foregoing, certain aspects of the present disclosure may be used to facilitate distribution of the processing load for rasterization of a virtual image containing graphics primitives through the use of tiling. Tiling makes it possible to determine the processing loads that need to be distributed.
While the above is a complete description of the preferred embodiment of the present invention, it is possible to use various alternatives, modifications and equivalents. Therefore, the scope of the present invention should be determined not with reference to the above description but should, instead, be determined with reference to the appended claims, along with their full scope of equivalents. Any feature described herein, whether preferred or not, may be combined with any other feature described herein, whether preferred or not. In the claims that follow, the indefinite article “A”, or “An” refers to a quantity of one or more of the item following the article, except where expressly stated otherwise. The appended claims are not to be interpreted as including means-plus-function limitations, unless such a limitation is explicitly recited in a given claim using the phrase “means for.”
This application is a continuation of commonly-assigned, application Ser. No. 13/631,803, filed Sep. 28, 2012 (now U.S. Patent Application Publication Number 2014/0092087), the entire disclosures of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
6009458 | Hawkins et al. | Dec 1999 | A |
6280323 | Yamazaki et al. | Aug 2001 | B1 |
6402620 | Naghi | Jun 2002 | B1 |
6699127 | Lobb et al. | Mar 2004 | B1 |
7159008 | Wies et al. | Jan 2007 | B1 |
7286132 | Kuhne | Oct 2007 | B2 |
7470196 | Joshi | Dec 2008 | B1 |
7493365 | Wies et al. | Feb 2009 | B2 |
7841946 | Walker et al. | Nov 2010 | B2 |
8085264 | Crow | Dec 2011 | B1 |
8267796 | Iwakiri | Sep 2012 | B2 |
8321571 | Crowder, Jr. et al. | Nov 2012 | B2 |
8661496 | Perlman et al. | Feb 2014 | B2 |
8935487 | Sengupta et al. | Jan 2015 | B2 |
9248374 | Watson et al. | Feb 2016 | B2 |
9258012 | Miura | Feb 2016 | B2 |
20020002510 | Sharp et al. | Jan 2002 | A1 |
20020045484 | Eck et al. | Apr 2002 | A1 |
20020065915 | Anderson et al. | May 2002 | A1 |
20030037030 | Dutta et al. | Feb 2003 | A1 |
20030190950 | Matsumoto | Oct 2003 | A1 |
20030225560 | Garcia et al. | Dec 2003 | A1 |
20040179019 | Sabella | Sep 2004 | A1 |
20040224772 | Canessa et al. | Nov 2004 | A1 |
20040266529 | Chatani | Dec 2004 | A1 |
20050195187 | Seiler et al. | Sep 2005 | A1 |
20050288954 | McCarthy et al. | Dec 2005 | A1 |
20060009290 | Taho et al. | Jan 2006 | A1 |
20060080702 | Diez et al. | Apr 2006 | A1 |
20060117260 | Sloo et al. | Jun 2006 | A1 |
20060146057 | Blythe | Jul 2006 | A1 |
20060160626 | Gatto et al. | Jul 2006 | A1 |
20060259292 | Solomon et al. | Nov 2006 | A1 |
20070060361 | Nguyen et al. | Mar 2007 | A1 |
20070298866 | Gaudiano et al. | Dec 2007 | A1 |
20080032794 | Ware et al. | Feb 2008 | A1 |
20080113749 | Williams et al. | May 2008 | A1 |
20080282241 | Dong | Nov 2008 | A1 |
20080300053 | Muller | Dec 2008 | A1 |
20090082102 | Sargaison et al. | Mar 2009 | A1 |
20090088236 | Laude et al. | Apr 2009 | A1 |
20090098943 | Weber et al. | Apr 2009 | A1 |
20090162029 | Glen | Jun 2009 | A1 |
20090282139 | Mejdrich et al. | Nov 2009 | A1 |
20090303245 | Soupikov | Dec 2009 | A1 |
20100088296 | Periyagaram et al. | Apr 2010 | A1 |
20100250650 | Allen | Sep 2010 | A1 |
20100259536 | Toksvig et al. | Oct 2010 | A1 |
20110013699 | Persson | Jan 2011 | A1 |
20110098111 | Saito et al. | Apr 2011 | A1 |
20110218037 | Singh | Sep 2011 | A1 |
20110299105 | Morrison | Dec 2011 | A1 |
20120021840 | Johnson et al. | Jan 2012 | A1 |
20120052930 | Mcgucken | Mar 2012 | A1 |
20120142425 | Scott et al. | Jun 2012 | A1 |
20120299940 | Dietrich et al. | Nov 2012 | A1 |
20130137518 | Lucas | May 2013 | A1 |
20140004941 | Watson et al. | Jan 2014 | A1 |
20140004949 | Miura et al. | Jan 2014 | A1 |
20140004956 | Miura et al. | Jan 2014 | A1 |
20140004957 | Stine et al. | Jan 2014 | A1 |
20140004962 | Miura et al. | Jan 2014 | A1 |
20140066177 | Zalewski | Mar 2014 | A1 |
20140092087 | Kazama et al. | Apr 2014 | A1 |
20140094299 | Stine et al. | Apr 2014 | A1 |
20140094313 | Watson et al. | Apr 2014 | A1 |
20140094314 | Watson et al. | Apr 2014 | A1 |
20140094315 | Stine et al. | Apr 2014 | A1 |
Number | Date | Country |
---|---|---|
2014052205 | Apr 2014 | WO |
Entry |
---|
European Search Report and Written Opinion for EP Application No. 13841130.1, dated Oct. 4, 2016. |
Non-Final Office Action for U.S. Appl. No. 13/631,785, dated Nov. 3, 2016. |
Non-Final Office Action for U.S. Appl. No. 13/631,812, dated Jan. 18, 2017. |
Notice of Allowance for U.S. Appl. No. 13/790,320, dated Dec. 5, 2016. |
Non-Final Office Action for U.S. Appl. No. 13/631,785, dated Oct. 22, 2014. |
Non-Final Office Action for U.S. Appl. No. 13/631,803, dated Oct. 14, 2014. |
Non-Final Office Action for U.S. Appl. No. 13/791,379, dated Oct. 16, 2014. |
U.S. Appl. No. 61/666,628, entitled “Adding Triggers to Cloud-Based Emulated Games” to Victor Octav Suba Miura et al., filed Jun. 30, 2013. |
U.S. Appl. No. 61/666,645, entitled “Haptic Enhancements for Emulated Video Game Not Originally Designed With Haptic Capabilities” to Victor Octav Suba Miura, et al., filed Jun. 29, 2012. |
U.S. Appl. No. 61/666,665, entitled “Conversion of Haptic Events Into Screen Events” to Brian Michael, et al., filed Jun. 30, 2013. |
U.S. Appl. No. 61/666,679, entitled “Suspending State of Cloud-Based Legacy Application” to Jacob P. Stine et al., filed Jun. 30, 2013. |
U.S. Appl. No. 61/666,628, to Victor Octav Suba Miura, filed Jun. 29, 2012. |
U.S. Appl. No. 61/666,645, to Victor Octav Suba Miura, filed Jun. 29, 2012. |
U.S. Appl. No. 61/666,665, to Brian Michael Christopher Watson, filed Jun. 29, 2012. |
U.S. Appl. No. 61/666,679, to Jacob P. Stine, filed Jun. 29, 2012. |
U.S. Appl. No. 61/694,718, to Gary M. Zalewski, filed Aug. 29, 2012. |
U.S. Appl. No. 61/794,811, to Victor Octav Suba Miura, filed Mar. 15, 2013. |
Final Office Action for U.S. Appl. No. 13/631,725, dated Dec. 19, 2014. |
Final Office Action for U.S. Appl. No. 13/631,740, dated Jul. 27, 2015. |
Final Office Action for U.S. Appl. No. 13/631,785, dated Dec. 4, 2015. |
Final Office Action for U.S. Appl. No. 13/631,803, dated Feb. 1, 2016. |
Final Office Action for U.S. Appl. No. 13/631,812, dated Aug. 29, 2014. |
Final Office Action for U.S. Appl. No. 13/790,311, dated Jul. 15, 2016. |
Final Office Action for U.S. Appl. No. 13/790,320, dated Feb. 10, 2016. |
Final Office Action for U.S. Appl. No. 13/790,320, dated Jan. 15, 2015. |
Final Office Action for U.S. Appl. No. 13/791,379, dated May 13, 2015. |
Final Office Action for U.S. Appl. No. 13/791,420, dated Jun. 11, 2014. |
Final Office Action for U.S. Appl. No. 13/791,420, dated Oct. 9, 2015. |
Final Office Action for U.S. Appl. No. 13/791,434, dated Feb. 17, 2016. |
Final Office Action for U.S. Appl. No. 13/791,434, dated Jun. 23, 2015. |
Final Office Action for U.S. Appl. No. 13/792,664, dated Jan. 20, 2015. |
Final Office Action for U.S. Appl. No. 13/792,664, dated Jun. 17, 2016. |
Final Office Action for U.S. Appl. No. 15/019,891, dated Oct. 19, 2016. |
Final Office Action for U.S. Appl. No. 13/631,803, dated Apr. 16, 2015. |
Final Office Action for U.S. Appl. No. 13/790,311, dated Mar. 27, 2015. |
Non-Final Office Action for U.S. Appl. No. 13/790,311, dated Jun. 27, 2013. |
Non-Final Office Action for U.S. Appl. No. 13/631,725, dated Mar. 16, 2015. |
Non-Final Office Action for U.S. Appl. No. 13/631,725, dated Sep. 12, 2014. |
Non-Final Office Action for U.S. Appl. No. 13/631,740, dated Oct. 21, 2014. |
Non-Final Office Action for U.S. Appl. No. 13/631,812, dated Jun. 3, 2016. |
Non-Final Office Action for U.S. Appl. No. 13/631,812, dated Mar. 28, 2014. |
Non-Final Office Action for U.S. Appl. No. 13/790,311, dated Feb. 26, 2014. |
Non-Final Office Action for U.S. Appl. No. 13/790,311,dated Sep. 9, 2014. |
Non-Final Office Action for U.S. Appl. No. 13/790,320, dated Jun. 18, 2014. |
Non-Final Office Action for U.S. Appl. No. 13/791,379, dated Mar. 27, 2014. |
Non-Final Office Action for U.S. Appl. No. 13/791,420, dated Mar. 27, 2014. |
Non-Final Office Action for U.S. Appl. No. 13/792,664, dated Jun. 23, 2014. |
Non-Final Office Action for U.S. Appl. No. 13/791,434, dated Nov. 26, 2014. |
Non-Final Office Action for U.S. Appl. No. 13/631,740, dated Feb. 27, 2015. |
Non-Final Office Action for U.S. Appl. No. 13/631,740, dated Sep. 30, 2016. |
Non-Final Office Action for U.S. Appl. No. 13/631,785, dated May 21, 2015. |
Non-Final Office Action for U.S. Appl. No. 13/631,803, dated Sep. 17, 2015. |
Non-Final Office Action for U.S. Appl. No. 13/790,311, dated Nov. 19, 2015. |
Non-Final Office Action for U.S. Appl. No. 13/790,320, dated Jul. 28, 2015. |
Non-Final Office Action for U.S. Appl. No. 13/791,379, dated Jul. 1, 2016. |
Non-Final Office Action for U.S. Appl. No. 13/791,420, dated Apr. 9, 2015. |
Non-Final Office Action for U.S. Appl. No. 13/792,664, dated Dec. 4, 2015. |
Non-Final Office Action for U.S. Appl. No. 14/183,351, dated May 11, 2015. |
Non-Final Office Action for U.S. Appl. No. 15/019,891, dated May 6, 2016. |
Notice of Allowance for U.S. Appl. No. 15/019,891, dated Jan. 26, 2017. |
Number | Date | Country | |
---|---|---|---|
20160364906 A1 | Dec 2016 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13631803 | Sep 2012 | US |
Child | 15225361 | US |