System and method for entering and exiting sleep mode in a graphics subsystem

Information

  • Patent Grant
  • 10817043
  • Patent Number
    10,817,043
  • Date Filed
    Tuesday, July 26, 2011
  • Date Issued
    Tuesday, October 27, 2020
Abstract
A technique is disclosed for a graphics processing unit (GPU) to enter and exit a power saving deep sleep mode. The technique involves preserving processing state within local memory by configuring the local memory to operate in a self-refresh mode while the GPU is powered off for deep sleep. An interface circuit coupled to the local memory is configured to prevent spurious GPU signals from disrupting proper self-refresh of the local memory. Spurious GPU signals may result from GPU power down and GPU power up events associated with the GPU entering and exiting the deep sleep mode.
Description
BACKGROUND OF THE INVENTION
Field of the Invention

The invention relates generally to graphics processing systems and, more specifically, to a system and method for entering and exiting sleep mode in a graphics subsystem.


Description of the Related Art

Certain computer systems include a graphics processing unit (GPU) configured to perform computational tasks in cooperation with a central processing unit (CPU). During normal operation, the GPU may be assigned the computational tasks as needed. Data and program code related to the computational tasks are conventionally stored within a local memory system comprising one or more memory devices. Certain state information related to the computational tasks may be stored on the GPU. Between performing the computational tasks, the GPU may remain idle for predictable spans of time. During a span of idle time, the GPU may be put in a sleep mode to reduce power consumption. One type of sleep mode involves gating off a primary clock signal to one or more clock domains within the GPU. Gating off the primary clock signal can beneficially reduce dynamic power consumption. However, modern fabrication technology that enables the manufacture of advanced GPU devices with extremely dense circuitry inevitably introduces significant static power dissipation, which is present whenever the GPU device is powered on.


To address static power dissipation during spans of idle time, a second sleep mode, referred to herein as a deep sleep mode, involves actually shutting off power to the GPU. The deep sleep mode further reduces average power consumption by eliminating both dynamic and static power consumption associated with portions of the GPU circuitry that enter the deep sleep mode.


Prior to entering the deep sleep mode, operating state information for the GPU, which may include certain contents of the local memory as well as certain portions of internal GPU state, needs to be saved to system memory, which is configured to preserve the operating state information. The operating state information needs to be restored within the GPU and local memory immediately following an exit from the deep sleep mode and prior to the GPU resuming operation. In conventional systems, each time the GPU is put into deep sleep, the operating state information is transmitted to a main memory associated with the CPU, and each time the GPU exits deep sleep, the operating state information is transmitted from the main memory back to the GPU and local memory. Entering and exiting deep sleep therefore involves transmitting significant amounts of state information between system memory and the GPU. As a consequence, use of the deep sleep mode can be very time consuming and can lead to overall system performance degradation.


As the foregoing illustrates, what is needed in the art is an improved technique for entering and exiting a deep sleep mode in a graphics processing unit.


SUMMARY OF THE INVENTION

One embodiment of the present invention sets forth a method implemented by a graphics processing unit (GPU) for entering and exiting sleep mode. The method includes receiving a command to enter a sleep mode, saving internal processing state for the GPU to a memory system local to the GPU, causing at least one memory device included in the memory system to enter a self-refresh mode, and entering a power-down state.


Another embodiment of the present invention sets forth a computer-readable storage medium including instructions that, when executed by a processor, cause the processor to perform the method steps set forth above. Yet another embodiment of the present invention sets forth a computing device configured to implement the method steps set forth above.


One advantage of the disclosed technique is that a GPU may efficiently enter and exit a deep sleep power saving mode by leveraging low power self-refresh modes available from locally attached memory. By contrast, prior art systems do not benefit from maintaining GPU context within local memory.





BRIEF DESCRIPTION OF THE DRAWINGS

So that the manner in which the above recited features of the invention can be understood in detail, a more particular description of the invention, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this invention and are therefore not to be considered limiting of its scope, for the invention may admit to other equally effective embodiments.



FIG. 1 is a block diagram illustrating a computer system configured to implement one or more aspects of the present invention;



FIG. 2 illustrates communication signals between a parallel processing subsystem and various components of the computer system, according to one embodiment of the present invention;



FIG. 3A is a detailed illustration of an external conditioning circuit, according to one embodiment of the present invention;



FIG. 3B is a detailed illustration of an integrated conditioning circuit, according to one embodiment of the present invention;



FIG. 4A sets forth a flowchart of method steps for causing the parallel processing system to enter a deep sleep state, according to one embodiment of the present invention; and



FIG. 4B sets forth a flow chart of method steps for causing the parallel processing system to exit the deep sleep state, according to one embodiment of the present invention.





DETAILED DESCRIPTION

In the following description, numerous specific details are set forth to provide a more thorough understanding of the invention. However, it will be apparent to one of skill in the art that the invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in order to avoid obscuring the invention.


System Overview


FIG. 1 is a block diagram illustrating a computer system 100 configured to implement one or more aspects of the present invention. Computer system 100 includes a central processing unit (CPU) 102 and a system memory 104 configured to communicate via an interconnection path that may include a memory bridge 105. Memory bridge 105, which may be, e.g., a Northbridge chip, is connected via a bus or other communication path 106 (e.g., a HyperTransport link) to an I/O (input/output) bridge 107. I/O bridge 107, which may be, e.g., a Southbridge chip, receives user input from one or more user input devices 108 (e.g., keyboard, mouse) and forwards the input to CPU 102 via communications path 106 and memory bridge 105. A parallel processing subsystem 112 is coupled to memory bridge 105 via a bus or other communication path 113 (e.g., a PCI Express, Accelerated Graphics Port, or HyperTransport link). In one embodiment, parallel processing subsystem 112 is a graphics subsystem that delivers pixels to a display device 110 (e.g., a conventional CRT or LCD based monitor). A parallel processing subsystem driver 103 is configured to manage the parallel processing subsystem 112. The parallel processing subsystem driver 103 may be configured to send graphics primitives over communication path 113 for parallel processing subsystem 112 to generate pixel data for display on display device 110. A system disk 114 is also connected to I/O bridge 107. A switch 116 provides connections between I/O bridge 107 and other components such as a network adapter 118 and various add-in cards 120 and 121.


An embedded controller 150 is coupled to the parallel processing subsystem 112. In one embodiment, the embedded controller 150 is also coupled to the CPU 102 via an interconnect path that may include the memory bridge 105. Alternatively, the embedded controller 150 is coupled to the CPU 102 via the I/O bridge 107. As described in greater detail below, embedded controller 150 is configured to manage certain operational aspects of the parallel processing subsystem 112.


Other components (not explicitly shown), including universal serial bus (USB) connections or other port connections, CD drives, DVD drives, film recording devices, and the like, may also be connected to either the memory bridge 105, or the I/O bridge 107. Communication paths interconnecting the various components in FIG. 1 may be implemented using any suitable protocols, such as PCI (Peripheral Component Interconnect), PCI-Express, AGP (Accelerated Graphics Port), HyperTransport, or any other bus or point-to-point communication protocol(s). The connections between different devices may use any technically feasible protocols.


In one embodiment, the parallel processing subsystem 112 incorporates circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitutes a graphics processing unit (GPU). In another embodiment, the parallel processing subsystem 112 may be integrated with one or more other system elements, such as the memory bridge 105, CPU 102, and I/O bridge 107 to form a system on chip (SoC).


It will be appreciated that the system shown herein is illustrative and that variations and modifications are possible. The connection topology, including the number and arrangement of bridges, the number of CPUs 102, and the number of parallel processing subsystems 112, may be modified as needed for a specific implementation. For instance, in some embodiments, system memory 104 is connected to CPU 102 directly rather than through a bridge, and other devices communicate with system memory 104 via memory bridge 105 and CPU 102. In other alternative topologies, parallel processing subsystem 112 is connected to I/O bridge 107 or directly to CPU 102, rather than to memory bridge 105. In still other embodiments, I/O bridge 107 and memory bridge 105 are integrated into a single chip. Certain embodiments may include two or more CPUs 102 and two or more parallel processing subsystems 112. The particular components shown herein are optional; for instance, any number of add-in cards or peripheral devices might be supported. In some embodiments, switch 116 is eliminated, and network adapter 118 and add-in cards 120, 121 connect directly to I/O bridge 107.



FIG. 2 illustrates communication signals between parallel processing subsystem 112 and various components of computer system 100, according to one embodiment of the present invention. A detail of computer system 100 is shown, illustrating the embedded controller (EC) 150, an SPI flash device 256, a system basic input/output system (SBIOS) 252, and the driver 103. EC 150 may implement an advanced configuration and power interface (ACPI) that allows an operating system executing on CPU 102 to configure and control the power management of various components of computer system 100. In one embodiment, EC 150 allows the operating system executing on CPU 102 to communicate with GPU 240 via driver 103 even when the communications path 113 is disabled. In one embodiment, the communication path 113 comprises a PCIe bus, which may be enabled during active operation of the GPU 240 or disabled to save power when the GPU 240 is in a power saving (sleep) mode. For example, if GPU 240 and the PCIe bus are shut down in a power saving mode, the operating system executing on CPU 102 may instruct EC 150 to wake up GPU 240 by sending a notify ACPI event to EC 150 via driver 103.
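To make the wake path concrete, the following Python sketch models how a wake request can be routed from the operating system, through driver 103 and EC 150, to GPU 240 while communications path 113 is disabled. The class, method, and event names are hypothetical illustrations, not part of the patent; this is a message-flow sketch only.

```python
class Gpu:
    """Minimal stand-in for GPU 240's power state."""
    def __init__(self):
        self.powered = False


class EmbeddedController:
    """Stand-in for EC 150: wakes the GPU on a notify ACPI event."""
    def __init__(self, gpu):
        self.gpu = gpu

    def notify(self, event):
        if event == "ACPI_NOTIFY_WAKE_GPU":
            self.gpu.powered = True  # re-enable the GPU supply rail


class Driver:
    """Stand-in for driver 103: forwards OS requests to the EC."""
    def __init__(self, ec):
        self.ec = ec

    def request_wake(self):
        # Communications path 113 (PCIe) may be down, so the request
        # reaches the GPU via the embedded controller instead.
        self.ec.notify("ACPI_NOTIFY_WAKE_GPU")


gpu = Gpu()
driver = Driver(EmbeddedController(gpu))
driver.request_wake()
assert gpu.powered  # GPU 240 woken without using communications path 113
```

The essential point the sketch captures is that the EC, not the disabled PCIe link, carries the wake request.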


The GPU 240 is coupled to a local memory system 242 via a memory interface bus 246. Data transfers via the memory interface bus 246 are enabled via memory clock enable signal CKE 248. The local memory system 242 comprises memory devices 244, such as dynamic random access memory (DRAM) devices.


Computer system 100 may include multiple display devices 110 such as an internal display panel 110(0) and one or more external display panels 110(1) to 110(N). Each of the one or more display devices 110 may be connected to GPU 240 via communication paths 280(0) to 280(N). In one embodiment, hot-plug detect (HPD) signals included in communication paths 280 are also connected to EC 150. When one or more display devices 110 are operating in a panel self-refresh mode, EC 150 may be responsible for monitoring HPD signals and waking up GPU 240 if EC 150 detects a hot-plug event or an interrupt request from one of the display devices 110.


In one embodiment, a video generation lock (GEN_LCK) signal is included between internal display device 110(0) and GPU 240. The GEN_LCK signal transmits a synchronization signal from the display device 110(0) to GPU 240. The GEN_LCK signal may be used by certain synchronization functions implemented by display device 110(0). For example, GPU 240 may synchronize video signals generated from pixel data in memory devices 244 with the GEN_LCK signal. GEN_LCK may indicate the start of the active frame, for example, by transmitting an internal vertical sync signal to GPU 240.


EC 150 transmits GPU power enable (GPU_PWR) and frame buffer power enable (FB_PWR) signals to voltage regulators (VR) 260 and 262, which are configured to provide supply voltages to the GPU 240 and memory devices 244, respectively. EC 150 also transmits the WARMBOOT, self-refresh enable (SELF_REF), and RESET signals to GPU 240 and receives a GPUEVENT signal from GPU 240. Finally, EC 150 may communicate with GPU 240 via an industry standard “I2C” or “SMBus” data bus. The functionality of these signals is described below.


The GPU_PWR signal controls the voltage regulator 260 that provides GPU 240 with a supply voltage. When display device 110 enters a self-refresh mode, an operating system executing on CPU 102 may instruct EC 150 to kill power to GPU 240 by making a call to driver 103. The EC 150 will then drive the GPU_PWR signal low to kill power to GPU 240 to reduce the overall power consumption of computer system 100. Similarly, the FB_PWR signal controls the voltage regulator that provides memory devices 244 with a supply voltage. When display device 110 enters a self-refresh mode, computer system 100 may also kill power to memory devices 244 in order to further reduce overall power consumption of computer system 100. The FB_PWR signal is controlled in a similar manner to the GPU_PWR signal. The RESET signal may be asserted during wake-up of the GPU 240 to hold GPU 240 in a reset state while the voltage regulators that provide power to GPU 240 and memory devices 244 are allowed to stabilize.
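The division of labor between GPU_PWR, FB_PWR, and RESET can be summarized with a small state model. The Python sketch below uses hypothetical names and assumes the deep sleep configuration in which the memory rail stays powered so the DRAM can self-refresh; it is illustrative only, not the patent's implementation.

```python
class RailController:
    """Models EC 150 driving GPU_PWR (VR 260), FB_PWR (VR 262), and RESET."""

    def __init__(self):
        self.gpu_pwr = True   # GPU 240 supply enabled
        self.fb_pwr = True    # memory devices 244 supply enabled
        self.reset = False    # RESET de-asserted

    def enter_deep_sleep(self):
        # Drop only the GPU rail; the DRAM keeps power for self-refresh.
        self.gpu_pwr = False

    def wake(self):
        # Hold RESET while the GPU rail stabilizes, then release it.
        self.reset = True
        self.gpu_pwr = True
        self.reset = False


ec = RailController()
ec.enter_deep_sleep()
assert ec.gpu_pwr is False and ec.fb_pwr is True  # DRAM rail survives deep sleep
ec.wake()
assert ec.gpu_pwr is True and ec.reset is False
```

The key design point is that the two rails are independently controlled, so cutting GPU power need not disturb the memory supply.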


The WARMBOOT signal is asserted by EC 150 to indicate that GPU 240 should restore an operating state from SPI flash device 256 instead of performing a full, cold-boot sequence. In one embodiment, when display device 110 enters a panel self-refresh mode, GPU 240 may be configured to save a current state in SPI flash device 256 before GPU 240 is powered down. GPU 240 may then restore an operating state by loading the saved state information from SPI flash device 256 upon waking up. Loading the saved state information reduces the time required to wake up GPU 240 relative to performing a full, cold-boot sequence. Reducing the wake-up time is advantageous during high-frequency entry into and exit from a panel self-refresh mode. In this scenario, power to the memory devices 244 may be held on to allow the memory devices 244 to operate in a low power self-refresh mode, thereby expediting a warm boot of the GPU 240.


The SELF_REF signal is asserted high (self-refresh is active) by EC 150 when display device 110 is operating in a panel self-refresh mode. The SELF_REF signal indicates to GPU 240 that display device 110 is currently operating in a panel self-refresh mode and that communications path 280 should be quiescent. In one embodiment, GPU 240 may connect one or more signals within the communications path 280 to ground through weak, pull-down resistors when the SELF_REF signal is asserted.


The GPUEVENT signal allows the GPU 240 to indicate to CPU 102 that an event has occurred, even when the PCIe bus is off. GPU 240 may assert the GPUEVENT signal to alert system EC 150 to configure the I2C/SMBUS to enable communication between the GPU 240 and the system EC 150. The I2C/SMBUS is a bidirectional communication bus, configured as an I2C bus, an SMBus, or another bidirectional bus, that enables GPU 240 and system EC 150 to communicate. In one embodiment, the PCIe bus may be shut down when display device 110 is operating in a panel self-refresh mode. The operating system may notify GPU 240 of events, such as cursor updates or a screen refresh, through system EC 150 even when the PCIe bus is shut down.



FIG. 3A is a detailed illustration of an external conditioning circuit 350, according to one embodiment of the present invention. The conditioning circuit 350 is configured to remove glitches from the memory clock enable signal CKE 248 of FIG. 2 during transitions between operating modes. Specifically, when the memory system 242 is in a self-refresh state, removing glitches from CKE 248 enables the memory system 242 to operate reliably in the self-refresh state. The operating modes may include, without limitation, power saving and normal modes of operation for the GPU 240 and memory system 242. The conditioning circuit 350 comprises field-effect transistor (FET) 310, resistors 312-316, FET 320, FET 330, resistors 322 and 334, and delay circuit 332. In one embodiment, resistor 312 is approximately two orders of magnitude lower in resistance than resistors 314 and 316, to provide a relatively strong pull-down path through resistor 312 and FET 310. For example, resistor 312 may be one hundred ohms, while resistors 314 and 316 may be ten thousand ohms. Furthermore, resistor 322 may be approximately two orders of magnitude lower in resistance than resistor 334. For example, resistor 322 may be one hundred ohms, while resistor 334 may be ten thousand ohms.


When signal pull enable 324 is asserted high, the conditioning circuit 350 pulls CKE 248 low and RSTM* 338 high (“*” indicates an active low signal). Specifically, FET 320 is turned on to pull RSTM* 338 high, which de-asserts reset to the memory system 242. With reset de-asserted, the memory system 242 may operate in self-refresh mode. When pull enable 324 is asserted high, FET 330 is also turned off, isolating RST* 336 from the RSTM* 338 signal. When pull enable 324 is asserted low, FETs 310 and 320 are turned off, disabling pull-down of CKE 248 and pull-up of RSTM* 338 by the conditioning circuit 350. When pull enable 324 is asserted low, FET 330 is also turned on, coupling RST* 336 to the RSTM* 338 signal, allowing the GPU 240 to control the reset state of the memory system 242. Delay circuit 332 may be configured to delay when FET 330 is turned on and off with respect to FETs 310 and 320.
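The signal behavior of the conditioning circuit can be captured as simple combinational logic. The Python sketch below is a logic-level model only (the class and method names are hypothetical, not part of the patent); it shows how pull enable 324 clamps CKE 248 low, forces RSTM* 338 high, and isolates RST* 336.

```python
class ConditioningCircuit:
    """Logic-level model of conditioning circuit 350 (FIG. 3A).

    When pull_enable (324) is high: CKE (248) is clamped low, RSTM* (338)
    is pulled high (memory reset de-asserted), and RST* (336) is isolated.
    When pull_enable is low: the GPU's CKE and RST* drive through normally.
    """

    def __init__(self):
        self.pull_enable = False

    def cke(self, gpu_cke):
        # Strong pull-down through FET 310 / resistor 312 overrides any
        # spurious level the powered-down GPU presents on CKE.
        return False if self.pull_enable else gpu_cke

    def rstm_n(self, gpu_rst_n):
        # FET 320 pulls RSTM* high; FET 330 isolates RST* when enabled.
        return True if self.pull_enable else gpu_rst_n


circuit = ConditioningCircuit()

# Self-refresh active: glitches on the GPU's CKE output are suppressed.
circuit.pull_enable = True
assert circuit.cke(gpu_cke=True) is False       # spurious CKE spike clamped
assert circuit.rstm_n(gpu_rst_n=False) is True  # memory reset held de-asserted

# Normal operation: GPU controls CKE and memory reset directly.
circuit.pull_enable = False
assert circuit.cke(gpu_cke=True) is True
assert circuit.rstm_n(gpu_rst_n=False) is False
```

The model omits the delay circuit 332 and analog details; it only captures which driver wins in each mode.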


During active operation, the GPU 240 transmits command information and data to the memory devices 244 by clocking the command information and data to the memory devices 244 when the memory clock enable signal CKE 248 is active. In one embodiment, the SELF_REF signal, which is generated by system EC 150 of FIG. 1, is coupled to and drives the pull enable 324 signal. When SELF_REF is asserted high (for power saving self-refresh mode), CKE 248 is pulled low via FET 310 and resistor 312, and RSTM* 338 is pulled high to FBVDDQ 326 via FET 320 and resistor 322. When SELF_REF is asserted low, the GPU 240 is allowed to enter active operation. During active operation, voltage regulator 260 is enabled to generate GPU supply voltage 362, and voltage regulator 262 is enabled to generate memory supply voltage 364. In one embodiment, FBVDDQ 326 is coupled to memory supply voltage 364.


To enter a deep sleep mode, wherein the GPU 240 is powered off, the memory devices 244 are configured to enter self-refresh mode to preserve data within the memory devices 244. When the pull enable 324 signal is asserted high (active) by the system EC 150, memory devices 244 are configured to operate in a self-refresh mode, which can potentially be interrupted if a spurious enable signal is received on CKE 248. When GPU 240 is powered on or powered off, circuitry within the GPU 240 may generate a spurious signal, such as an active sense spike, on CKE 248 and disrupt proper self-refresh operation of the memory devices 244. To avoid generating a spurious signal on CKE 248, the conditioning circuit 350 shunts (clamps) CKE 248 to ground via resistor 312 and FET 310 when self-refresh is active and the pull enable 324 signal is high. Prior to powering off the GPU 240 or powering on the GPU 240, CKE 248 is shunted to ground, thereby removing spurious signals from CKE 248. The conditioning circuit 350 is powered by a power domain that is isolated from GPU supply voltage 362. For example, the conditioning circuit 350 may be powered by memory supply voltage 364.


During normal operation, the GPU 240 generates and stores state data 340 within the memory devices 244. When the GPU 240 is powered-off, the self-refresh mode of memory devices 244 is used to preserve the state data 340. In one embodiment, the state data 340 comprises stored program and data information related to operations performed by the GPU 240. In an alternative embodiment, the state data 340 also includes internal state information conventionally stored exclusively within the GPU 240 during normal operation. In such an embodiment, the internal state information is written to the memory devices 244 prior to the GPU 240 being shut down for deep sleep operation, and prior to the memory devices 244 being put in self-refresh mode. Alternatively, the internal state information may be written to SPI flash 256 prior to the GPU 240 being shut down for deep sleep operation.



FIG. 3B is a detailed illustration of an integrated conditioning circuit 350, according to another embodiment of the present invention. In this embodiment, the conditioning circuit 350 is integrated within the GPU 240. Importantly, however, the conditioning circuit 350 is configured to operate from a separate power domain, such as from memory supply voltage 364, received from memory power regulator 370. GPU 240 receives GPU supply voltage 362 from GPU power regulator 360. Techniques for fabricating on-chip circuitry having isolated power domains are known in the art, and any such techniques may be employed to implement the conditioning circuit 350 without departing from the scope of the present invention.



FIG. 4A sets forth a flowchart of method steps 400 for causing the parallel processing system to enter a deep sleep state, according to one embodiment of the present invention. Although the method steps are described in conjunction with the systems of FIGS. 1-3B, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the invention.


The method begins in step 410, where the GPU 240 receives a command to transition to an idle state. In step 412, the GPU 240 halts processing. At this point, incoming commands may be stalled, any processing queues may be drained, and any pending requests completed or retired. After this step is complete, internal GPU state comprises primarily configuration and status information that may be readily saved. In step 414, the GPU 240 saves internal GPU state to a local memory. In one embodiment, the GPU 240 saves the internal GPU state to the memory devices 244. In an alternative embodiment, the GPU 240 saves the internal GPU state to a locally attached flash storage device, such as SPI flash 256. In step 416, the GPU 240 configures the memory devices 244 to enter self-refresh mode. At this point, the memory devices 244 are able to preserve stored data indefinitely while consuming relatively little power. In step 418, the GPU 240 enters a reset mode. While the GPU 240 enters the reset mode, the system EC 150 drives the SELF_REF signal active. In step 420, the GPU 240 powers down. The method terminates in step 420.
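The ordering of the steps above is what guarantees state preservation: internal state must land in local memory before self-refresh begins, and self-refresh must be active before power is removed. The Python sketch below (hypothetical names; illustrative only) records that ordering:

```python
class GpuEnterSleepModel:
    """Records the FIG. 4A step ordering for GPU 240."""

    def __init__(self):
        self.log = []

    def enter_deep_sleep(self):
        self.log.append("halt")          # step 412: stall commands, drain queues
        self.log.append("save_state")    # step 414: internal state -> local memory
        self.log.append("self_refresh")  # step 416: memory devices 244 preserve data
        self.log.append("reset")         # step 418: EC drives SELF_REF active
        self.log.append("power_down")    # step 420


gpu = GpuEnterSleepModel()
gpu.enter_deep_sleep()
# State is saved before self-refresh, and self-refresh precedes power-down.
assert gpu.log.index("save_state") < gpu.log.index("self_refresh") < gpu.log.index("power_down")
```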



FIG. 4B sets forth a flow chart of method steps 402 for causing the parallel processing system to exit the deep sleep state, according to one embodiment of the present invention. Although the method steps are described in conjunction with the systems of FIGS. 1-3B, persons skilled in the art will understand that any system configured to perform the method steps, in any order, is within the scope of the invention.


The method begins in step 440, where the GPU 240 enters a power-on state from a powered-off state. If, in step 450, the GPU 240 is entering the power-on state from deep sleep, then the method proceeds to step 454. In one embodiment, the GPU 240 determines that the power-on state follows a deep sleep state by examining the SELF_REF signal. If the GPU 240 powered on with the SELF_REF signal asserted high, then the GPU 240 is entering the power-on state from a deep sleep state. After the GPU 240 powers on, the system EC 150 de-asserts the SELF_REF signal. In step 454, the GPU 240 configures the memory devices 244 to exit self-refresh mode. In step 456, the GPU 240 reloads stored internal GPU state from the local memory. In one embodiment, the internal GPU state is stored in memory devices 244. In an alternative embodiment, the internal GPU state is stored in a local flash device, such as SPI flash 256. After reloading internal GPU state, the GPU 240 may resume normal operation. In step 460, the GPU 240 resumes normal operation by entering an operational state. The method terminates in step 460.
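The warm-versus-cold decision in steps 450-456 reduces to a branch keyed on the SELF_REF signal. The following Python sketch (hypothetical function and step labels; illustrative only) summarizes both paths through FIG. 4B:

```python
def exit_power_down(self_ref_asserted, log):
    """Sketch of the FIG. 4B sequence for GPU 240."""
    log.append("power_on")               # step 440: rails up, reset released
    if self_ref_asserted:                # step 450: waking from deep sleep?
        log.append("exit_self_refresh")  # step 454: memory devices 244 resume
        log.append("reload_state")       # step 456: state <- local memory
        log.append("resume")             # step 460: enter operational state
    else:
        log.append("cold_boot")          # step 452: conventional boot path


warm, cold = [], []
exit_power_down(self_ref_asserted=True, log=warm)
exit_power_down(self_ref_asserted=False, log=cold)
assert warm == ["power_on", "exit_self_refresh", "reload_state", "resume"]
assert cold == ["power_on", "cold_boot"]
```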


Returning to step 450, if the GPU 240 is not entering the power-on state from deep sleep, then the method proceeds to step 452. In step 452, the GPU 240 performs a conventional cold boot.


In sum, a technique is disclosed for a GPU to enter and exit a deep sleep mode. The GPU is able to efficiently enter a deep sleep mode by storing certain processing context within locally attached memory. The GPU is able to efficiently exit the deep sleep mode by having access to state information that is preserved while the GPU is powered off.


One advantage of the disclosed technique is that a GPU may efficiently enter and exit a deep sleep power saving mode by leveraging low power self-refresh modes available from locally attached memory. By contrast, prior art systems do not benefit from maintaining GPU context within local memory.


While the foregoing is directed to embodiments of the invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof. For example, aspects of the present invention may be implemented in hardware or software or in a combination of hardware and software. One embodiment of the invention may be implemented as a program product for use with a computer system. The program(s) of the program product define functions of the embodiments (including the methods described herein) and can be contained on a variety of computer-readable storage media. Illustrative computer-readable storage media include, but are not limited to: (i) non-writable storage media (e.g., read-only memory devices within a computer such as CD-ROM disks readable by a CD-ROM drive, flash memory, ROM chips or any type of solid-state non-volatile semiconductor memory) on which information is permanently stored; and (ii) writable storage media (e.g., floppy disks within a diskette drive or hard-disk drive or any type of solid-state random-access semiconductor memory) on which alterable information is stored. Such computer-readable storage media, when carrying computer-readable instructions that direct the functions of the present invention, are embodiments of the invention.


In view of the foregoing, the scope of the invention is determined by the claims that follow.

Claims
  • 1. A method implemented by a graphics processing unit (GPU) for entering and exiting sleep mode, the method comprising: receiving a command for the GPU to enter a sleep mode; and in response to receiving the command for the GPU to enter the sleep mode, performing the steps of: saving an internal processing state of the GPU to a memory system local to the GPU; causing at least one enable signal used for controlling transmission of data between the GPU and the memory system to be clamped to a fixed voltage prior to entering a power-down state; causing a first communications bus that couples the GPU to a central processing unit (CPU) to be disabled; causing at least one memory device included in the memory system to enter a self-refresh mode; transmitting a signal to an embedded controller to enable a second communications bus that couples the embedded controller to the GPU; and entering the power-down state.
  • 2. The method of claim 1, further comprising stalling all incoming workloads and completing all in-progress processing in order to halt processing within the GPU.
  • 3. The method of claim 1, wherein the at least one memory device comprises a dynamic random access memory (DRAM) device, and the step of saving comprises copying the internal processing state for the GPU to the DRAM device.
  • 4. The method of claim 1, wherein the at least one memory device comprises a non-volatile memory, and the step of saving comprises copying the internal processing state for the GPU and a memory interface state to the non-volatile memory.
  • 5. The method of claim 4, wherein the non-volatile memory comprises at least one flash memory device coupled to the GPU via an interface that is independent of an interface to a different memory device within the memory system.
  • 6. The method of claim 1, wherein the GPU and memory system are configured to operate at a predetermined speed and operating state prior to entering the power-down state.
  • 7. The method of claim 1, further comprising: entering a power-on state; determining that the power-on state is associated with the sleep mode; causing the at least one memory device included in the memory system to exit the self-refresh mode; reloading the internal processing state for the GPU from the memory system to the GPU.
  • 8. The method of claim 7, further comprising performing a power-on reset of the GPU and detecting a warm-boot state for transitioning the GPU from the power-down state to the power-on state.
  • 9. The method of claim 7, wherein the at least one memory device comprises a DRAM device, and the step of reloading comprises copying the internal processing state for the GPU from the DRAM device to the GPU.
  • 10. The method of claim 9, wherein the at least one memory device comprises a non-volatile memory device, and the step of reloading comprises copying the internal processing state for the GPU and a DRAM interface state from the non-volatile memory device to the GPU.
  • 11. The method of claim 10, wherein the non-volatile memory device comprises at least one flash memory device, coupled to the GPU via an interface that is independent of the DRAM device.
  • 12. The method of claim 7, wherein the GPU and the memory system are configured to operate at a predetermined speed and operating state prior to entering the power-on state.
  • 13. The method of claim 7, further comprising causing at least one enable signal used for controlling transmission of data between the GPU and the memory system to be clamped to a fixed voltage prior to exiting the power-on state.
  • 14. The method of claim 10, wherein a portion of the internal processing state for the GPU and the DRAM interface state are restored to a previously configured state based on data stored within the non-volatile memory.
  • 15. The method of claim 1, wherein the CPU is configured to communicate with the GPU when the first communications bus is disabled.
  • 16. The method of claim 1, wherein the CPU is configured to notify the GPU of cursor updates and screen refreshes when the first communications bus is disabled.
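The sleep-entry sequence recited in claims 1 and 17 can be illustrated with a minimal sketch. This is not code from the patent: the class, attribute names, and the dictionary-based memory model are all hypothetical, chosen only to make the ordering of the claimed steps concrete (save state, clamp the enable signals, disable the CPU bus, enter self-refresh, enable the embedded-controller bus, power down).

```python
class GpuModel:
    """Toy model of a GPU and its local memory system (illustrative only)."""

    def __init__(self):
        self.state = {"registers": [0] * 8}   # volatile internal processing state
        self.local_memory = {}                # models DRAM local to the GPU
        self.enables_clamped = False          # isolation-circuit clamp on enable signals
        self.cpu_bus_enabled = True           # first bus: GPU <-> CPU (e.g. PCIe)
        self.ec_bus_enabled = False           # second bus: embedded controller <-> GPU
        self.dram_self_refresh = False
        self.powered = True

    def enter_sleep(self):
        # 1. Save the internal processing state to the local memory system.
        self.local_memory["saved_state"] = dict(self.state)
        # 2. Clamp enable signals to a fixed voltage so spurious signals from
        #    the power-down event cannot disturb the self-refreshing DRAM.
        self.enables_clamped = True
        # 3. Disable the first communications bus to the CPU.
        self.cpu_bus_enabled = False
        # 4. Put the memory device(s) into self-refresh mode.
        self.dram_self_refresh = True
        # 5. Signal the embedded controller to enable the second bus.
        self.ec_bus_enabled = True
        # 6. Enter the power-down state; volatile GPU state is lost.
        self.state = {"registers": [0] * 8}
        self.powered = False
```

The ordering matters in the claims: the clamp is asserted and self-refresh is entered before power-down, so the DRAM contents survive while the GPU itself is unpowered.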
  • 17. A non-transitory computer-readable storage medium including instructions that, when executed by a graphics processing unit (GPU), cause the GPU to enter and exit a sleep mode, by performing the steps of: receiving a command for the GPU to enter a sleep mode; and in response to receiving the command for the GPU to enter the sleep mode, performing the steps of: saving an internal processing state of the GPU to a memory system local to the GPU; causing at least one enable signal used for controlling transmission of data between the GPU and the memory system to be clamped to a fixed voltage prior to entering a power-down state; causing a first communications bus that couples the GPU to a central processing unit (CPU) to be disabled; causing at least one memory device included in the memory system to enter a self-refresh mode; transmitting a signal to an embedded controller to enable a second communications bus that couples the embedded controller to the GPU; and entering the power-down state.
  • 18. The non-transitory computer-readable storage medium of claim 17, further comprising stalling all incoming workloads and completing all in-progress processing in order to halt processing within the GPU.
  • 19. The non-transitory computer-readable storage medium of claim 17, wherein the at least one memory device comprises a dynamic random access memory (DRAM) device, and the step of saving comprises copying the internal processing state for the GPU to the DRAM device.
  • 20. The non-transitory computer-readable storage medium of claim 17, wherein the at least one memory device comprises a non-volatile memory, and the step of saving comprises copying the internal processing state for the GPU and a memory interface state to the non-volatile memory.
  • 21. The non-transitory computer-readable storage medium of claim 20, wherein the non-volatile memory comprises at least one flash memory device coupled to the GPU via an interface that is independent of an interface to a different memory device within the memory system.
  • 22. The non-transitory computer-readable storage medium of claim 17, wherein the GPU and memory system are configured to operate at a predetermined speed and operating state prior to entering the power-down state.
  • 23. The non-transitory computer-readable storage medium of claim 17, further comprising: entering a power-on state; determining that the power-on state is associated with the sleep mode; causing the at least one memory device included in the memory system to exit the self-refresh mode; reloading the internal processing state for the GPU from the memory system to the GPU.
  • 24. The non-transitory computer-readable storage medium of claim 23, further comprising performing a power-on reset of the GPU and detecting a warm-boot state for transitioning the GPU from the power-down state to the power-on state.
  • 25. The non-transitory computer-readable storage medium of claim 23, wherein the at least one memory device comprises a DRAM device, and the step of reloading comprises copying the internal processing state for the GPU from the DRAM device to the GPU.
  • 26. The non-transitory computer-readable storage medium of claim 25, wherein the at least one memory device comprises a non-volatile memory device, and the step of reloading comprises copying the internal processing state for the GPU and a DRAM interface state from the non-volatile memory device to the GPU.
  • 27. The non-transitory computer-readable storage medium of claim 26, wherein the non-volatile memory device comprises at least one flash memory device, coupled to the GPU via an interface that is independent of the DRAM device.
  • 28. The non-transitory computer-readable storage medium of claim 23, wherein the GPU and the memory system are configured to operate at a predetermined speed and operating state prior to entering the power-on state.
  • 29. The non-transitory computer-readable storage medium of claim 23, further comprising causing at least one enable signal used for controlling transmission of data between the GPU and the memory system to be clamped to a fixed voltage prior to exiting the power-on state.
  • 30. The non-transitory computer-readable storage medium of claim 26, wherein a portion of the internal processing state for the GPU and the DRAM interface state are restored to a previously configured state based on data stored within the non-volatile memory.
  • 31. A computing device, comprising: a memory system configured to operate in an active mode and a low power self-refresh mode; an isolation circuit coupled to the memory system and configured to cause at least one enable signal used for controlling transmission of data between a graphics processing unit (GPU) and the memory system to be clamped to a fixed voltage prior to entering a power-down state; the GPU coupled to the memory system and to the isolation circuit and configured to: receive a command for the GPU to enter a sleep mode; and in response to receiving the command for the GPU to enter the sleep mode, performing the steps of: save an internal processing state of the GPU to the memory system; cause a first communications bus that couples the GPU to a central processing unit (CPU) to be disabled; cause at least one memory device included in the memory system to enter a self-refresh mode; transmit a signal to an embedded controller to enable a second communications bus that couples the embedded controller to the GPU; enter the power-down state; enter a power-on state; determine that the power-on state is associated with a sleep mode; cause the first communications bus to be enabled; cause the at least one memory device included in the memory system to exit the self-refresh mode; and load the internal processing state for the GPU from the memory system to the GPU.
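The warm-boot exit path recited in claims 7, 8, and 23 can likewise be sketched. Again, nothing here is from the patent: the function, the dictionary keys, and the cold-boot/warm-boot return values are hypothetical names used to show the claimed ordering (power on, detect the warm-boot state, re-enable the CPU bus, exit self-refresh, reload the saved state, release the clamp).

```python
def wake_from_sleep(system):
    """Illustrative warm-boot sequence over a toy system-state dictionary."""
    # 1. Enter the power-on state (e.g. following a power-on reset).
    system["powered"] = True
    # 2. Determine whether this power-on is associated with the sleep mode
    #    (a warm boot) rather than a cold boot.
    if not system.get("warm_boot"):
        return "cold_boot"  # full initialization path, not modeled here
    # 3. Re-enable the first communications bus to the CPU.
    system["cpu_bus_enabled"] = True
    # 4. Take the memory device(s) out of self-refresh mode.
    system["dram_self_refresh"] = False
    # 5. Reload the saved internal processing state from local memory.
    system["gpu_state"] = system["local_memory"]["saved_state"]
    # 6. Release the clamped enable signals once the interface is stable.
    system["enables_clamped"] = False
    return "warm_boot"


# Minimal system image as it would look while asleep.
asleep = {
    "powered": False,
    "warm_boot": True,
    "cpu_bus_enabled": False,
    "dram_self_refresh": True,
    "enables_clamped": True,
    "local_memory": {"saved_state": {"registers": [1, 2, 3]}},
}
```

Because the DRAM was held in self-refresh across the power-down, the reload in step 5 recovers exactly the state saved on entry, which is what distinguishes this warm-boot path from a full cold-boot initialization.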
US Referenced Citations (249)
Number Name Date Kind
4016595 Compton Apr 1977 A
4089058 Murdock May 1978 A
4523192 Burton et al. Jun 1985 A
4709195 Hellekson et al. Nov 1987 A
4747041 Engel May 1988 A
5255094 Yong et al. Oct 1993 A
5424692 McDonald Jun 1995 A
5475329 Jones et al. Dec 1995 A
5481731 Conary et al. Jan 1996 A
5558071 Ward et al. Sep 1996 A
5596765 Baum Jan 1997 A
5614847 Kawahara et al. Mar 1997 A
5623677 Townsley et al. Apr 1997 A
5625807 Lee Apr 1997 A
5628019 O'Brien May 1997 A
5630145 Chen May 1997 A
5655112 MacInnis Aug 1997 A
5727221 Walsh Mar 1998 A
5729164 Pattantyus Mar 1998 A
5758166 Ajanovic May 1998 A
5758170 Woodward et al. May 1998 A
5761727 Wu Jun 1998 A
5842029 Conary et al. Nov 1998 A
5844786 Yoshida et al. Dec 1998 A
5867717 Milhaupt Feb 1999 A
5896140 O'Sullivan Apr 1999 A
5914545 Pollersbeck Jun 1999 A
5935253 Conary et al. Aug 1999 A
5974558 Cortopassi et al. Oct 1999 A
5991883 Atkinson Nov 1999 A
6040845 Melo Mar 2000 A
6148356 Archer Nov 2000 A
6199134 Deschepper Mar 2001 B1
6292859 Santiago Sep 2001 B1
6418504 Conway Jul 2002 B2
6433785 Garcia Aug 2002 B1
6438698 Hellum Aug 2002 B1
6466003 Gallavan et al. Oct 2002 B1
6539486 Rolls et al. Mar 2003 B1
6557065 Peleg Apr 2003 B1
6581115 Arimilli Jun 2003 B1
6604161 Miller Aug 2003 B1
6622178 Burke Sep 2003 B1
6691236 Atkinson Feb 2004 B1
6711691 Howard Mar 2004 B1
6717451 Klein et al. Apr 2004 B1
6785829 George et al. Aug 2004 B1
6807629 Billick Oct 2004 B1
6895456 Olarig May 2005 B2
6963340 Alben et al. Nov 2005 B1
6985152 Rubinstein Jan 2006 B2
7054964 Chan May 2006 B2
7058829 Hamilton Jun 2006 B2
7136953 Bisson Nov 2006 B1
7426597 Tsu Sep 2008 B1
7456833 Diard Nov 2008 B1
7469311 Tsu Dec 2008 B1
7548481 Dewey et al. Jun 2009 B1
7598959 Kardach Oct 2009 B2
7624221 Case Nov 2009 B1
7698489 Jacoby Apr 2010 B1
7800621 Fry Sep 2010 B2
7802118 Abdalla et al. Sep 2010 B1
7827424 Bounitch Nov 2010 B2
7925907 Reed Apr 2011 B1
7958483 Alben Jun 2011 B1
8040351 Diard Oct 2011 B1
8095813 Pernia et al. Jan 2012 B2
8255708 Zhang Aug 2012 B1
8325184 Jiao Dec 2012 B2
8352644 Malamant Jan 2013 B2
8392695 Lachwani et al. Mar 2013 B1
8621253 Brown Dec 2013 B1
8635480 Mimberg Jan 2014 B1
8717371 Wyatt May 2014 B1
8762748 Zhang Jun 2014 B1
8928907 Min Jan 2015 B2
8941672 Jacoby Jan 2015 B1
8943347 Khodorkovsky Jan 2015 B2
9047085 Wyatt Jun 2015 B2
9098259 Lachwani Aug 2015 B1
9342131 Saito May 2016 B2
9619005 No Apr 2017 B2
9625976 Zhang Apr 2017 B1
10365706 Bakshi Jul 2019 B2
20020060640 Davis et al. May 2002 A1
20020062416 Kim May 2002 A1
20020099980 Olarig Jul 2002 A1
20020109688 Olarig Aug 2002 A1
20020194548 Tetreault Dec 2002 A1
20030097495 Hansen May 2003 A1
20030182487 Dennis Sep 2003 A1
20030229403 Nakazawa et al. Dec 2003 A1
20030233588 Verdun Dec 2003 A1
20040024941 Olarig et al. Feb 2004 A1
20040138833 Flynn Jul 2004 A1
20040143776 Cox Jul 2004 A1
20040155636 Fukui et al. Aug 2004 A1
20040250035 Atkinson Dec 2004 A1
20050030311 Hara Feb 2005 A1
20050060468 Emerson Mar 2005 A1
20050133932 Pohl et al. Jun 2005 A1
20050148358 Lin Jul 2005 A1
20050151082 Coffin et al. Jul 2005 A1
20050231454 Alben et al. Oct 2005 A1
20050259106 Rai Nov 2005 A1
20050289377 Luong Dec 2005 A1
20060020835 Samson Jan 2006 A1
20060054610 Morimoto et al. Mar 2006 A1
20060089819 Dubal Apr 2006 A1
20060106911 Chapple May 2006 A1
20060112287 Paljug May 2006 A1
20060145328 Hsu Jul 2006 A1
20060170098 Hsu Aug 2006 A1
20060212733 Hamilton Sep 2006 A1
20060236027 Jain et al. Oct 2006 A1
20060245287 Seitz et al. Nov 2006 A1
20060259801 Chu Nov 2006 A1
20060259804 Fry Nov 2006 A1
20060287805 Enomoto et al. Dec 2006 A1
20070005995 Kardach Jan 2007 A1
20070094486 Moore et al. Apr 2007 A1
20070143640 Simeral Jun 2007 A1
20070201173 Chu et al. Aug 2007 A1
20070212103 Kikuchi Sep 2007 A1
20070296613 Hussain Dec 2007 A1
20070297501 Hussain Dec 2007 A1
20070300207 Booth et al. Dec 2007 A1
20080048755 Cho et al. Feb 2008 A1
20080117222 Leroy May 2008 A1
20080143731 Cheng Jun 2008 A1
20080168201 de Cesare Jul 2008 A1
20080168285 de Cesare Jul 2008 A1
20080180564 Yamaji Jul 2008 A1
20080263315 Zhang Oct 2008 A1
20080309355 Nozaki et al. Dec 2008 A1
20090041380 Watanabe Feb 2009 A1
20090073168 Jiao Mar 2009 A1
20090077307 Kaburlasos et al. Mar 2009 A1
20090079746 Howard et al. Mar 2009 A1
20090096797 Du et al. Apr 2009 A1
20090100279 Lee Apr 2009 A1
20090153211 Hendin et al. Jun 2009 A1
20090153540 Blinzer et al. Jun 2009 A1
20090193234 Ors et al. Jul 2009 A1
20090204766 Jacobi et al. Aug 2009 A1
20090204831 Cousson Aug 2009 A1
20090204834 Hendin et al. Aug 2009 A1
20090204835 Smith et al. Aug 2009 A1
20090204837 Raval et al. Aug 2009 A1
20090207147 Perrot Aug 2009 A1
20090240892 Moyer Sep 2009 A1
20090259982 Verbeure Oct 2009 A1
20100007646 Tsuei Jan 2010 A1
20100031071 Lu Feb 2010 A1
20100058089 Lerman Mar 2010 A1
20100064125 Liu et al. Mar 2010 A1
20100088453 Solki Apr 2010 A1
20100123725 Azar May 2010 A1
20100127407 LeBlanc et al. May 2010 A1
20100148316 Kim et al. Jun 2010 A1
20100157713 Furutani Jun 2010 A1
20100177070 Zhu et al. Jul 2010 A1
20100201340 Raghavan Aug 2010 A1
20100220102 Wyatt et al. Sep 2010 A1
20100281185 Takayama Nov 2010 A1
20100309704 Dattaguru et al. Dec 2010 A1
20110029694 Blinzer Feb 2011 A1
20110047326 Kaburlasos et al. Feb 2011 A1
20110053649 Wilson Mar 2011 A1
20110057936 Gotwalt Mar 2011 A1
20110060928 Khodorkovsky Mar 2011 A1
20110109371 Kastl May 2011 A1
20110143809 Salomone Jun 2011 A1
20110148890 Kaburlasos Jun 2011 A1
20110157191 Huang Jun 2011 A1
20110169840 Bakalash Jul 2011 A1
20110173476 Reed Jul 2011 A1
20110185208 Iwamoto Jul 2011 A1
20110215836 Shimizu Sep 2011 A1
20110216780 Zhu Sep 2011 A1
20110221417 Ishidoh et al. Sep 2011 A1
20110221757 Hsieh Sep 2011 A1
20110252200 Hendry Oct 2011 A1
20110264902 Hollingworth et al. Oct 2011 A1
20110285208 Xiao Nov 2011 A1
20120008431 Lee Jan 2012 A1
20120079302 Ise Mar 2012 A1
20120133659 Masnikosa May 2012 A1
20120151264 Balkan et al. Jun 2012 A1
20120204043 Hamasaki Aug 2012 A1
20120206461 Wyatt Aug 2012 A1
20120207208 Wyatt Aug 2012 A1
20120236013 Wyatt Sep 2012 A1
20120242671 Wyatt Sep 2012 A1
20120242672 Larson Sep 2012 A1
20120248875 Fang Oct 2012 A1
20120249559 Khodorkovsky Oct 2012 A1
20120249563 Wyatt Oct 2012 A1
20120286881 Mahooti et al. Nov 2012 A1
20120317607 Wyatt Dec 2012 A1
20130002596 Ke Jan 2013 A1
20130007492 Sokol, Jr. Jan 2013 A1
20130021352 Wyatt Jan 2013 A1
20130027413 Jayavant Jan 2013 A1
20130038615 Hendry Feb 2013 A1
20130063607 Shimotono Mar 2013 A1
20130069964 Wuu Mar 2013 A1
20130083047 Shamarao Apr 2013 A1
20130111092 Heller May 2013 A1
20130111242 Heller May 2013 A1
20130151881 Chen Jun 2013 A1
20130159741 Schluessler Jun 2013 A1
20130159750 Branover Jun 2013 A1
20130194286 Bourd Aug 2013 A1
20130198548 No Aug 2013 A1
20130235053 Bourd Sep 2013 A1
20130265307 Goel Oct 2013 A1
20130318278 Wu Nov 2013 A1
20130332764 Juang Dec 2013 A1
20130346640 Gui Dec 2013 A1
20140092103 Saulters Apr 2014 A1
20140092109 Saulters Apr 2014 A1
20140181806 Abiezzi Jun 2014 A1
20140181807 Fonseca Jun 2014 A1
20140184629 Wyatt Jul 2014 A1
20140232731 Holland Aug 2014 A1
20140258738 Greenwalt Sep 2014 A1
20140344599 Branover Nov 2014 A1
20140380028 Cheng Dec 2014 A1
20150153818 Jeon Jun 2015 A1
20150193062 Wyatt Jul 2015 A1
20150194137 Wyatt Jul 2015 A1
20150379670 Koker Dec 2015 A1
20160150130 Noro May 2016 A1
20160209900 Blayvas Jul 2016 A1
20160292812 Wu Oct 2016 A1
20160370844 Kumar Dec 2016 A1
20160378709 Menachem Dec 2016 A1
20170061568 Metz Mar 2017 A1
20170154005 Ahmed Jun 2017 A1
20170199542 Sylvester Jul 2017 A1
20170277643 Zhou Sep 2017 A1
20180232034 DiBene, II Aug 2018 A1
20180293205 Koker Oct 2018 A1
20190056955 Pennala Feb 2019 A1
20190235615 Shows Aug 2019 A1
20190303322 Sharma Oct 2019 A1
20200135151 Jiang Apr 2020 A1
Foreign Referenced Citations (5)
Number Date Country
1229710 Nov 2005 CN
1983329 Jun 2007 CN
1 951 015 Jul 2008 EP
200923784 Jun 2009 TW
2008067258 Jun 2008 WO
Non-Patent Literature Citations (1)
Entry
Search and Examination Report from the UK Intellectual Property Office dated Oct. 31, 2012.
Related Publications (1)
Number Date Country
20130027413 A1 Jan 2013 US