The present invention is generally related to scheduling of clusters of units of shader work in a graphics processing system. More particularly, the present invention is related to scheduling clusters of warps or wavefronts to individual segments of a vector register file.
Graphics processing systems are sometimes configured as Single Instruction Multiple Thread (SIMT) machines in which multiple threads execute the same function. In particular, a group of threads, organized in blocks, is assigned to the same processor. A block may further be divided into units of thread scheduling (e.g., units of work). A warp is a group of parallel threads that executes a single instruction from the same instruction stream. An individual warp may, for example, have 32 threads. A warp is also a unit of thread scheduling. A warp additionally has associated shared resources that are allocated to it, including an area in a register file.
A warp is an organizational technique appropriate to SIMT style computations where a multiplicity of threads executes the same instruction from the same instruction stream. The warp concept allows the management of these threads to be simplified and streamlined. The warp concept also manages the sharing of resources, such as register file space, over a number of threads.
Conventionally, the scheduling of warps and the associated memory allocation in a register file are optimized for performance and/or memory usage. The warps are typically loaded into the register file in a random order across the memory boundaries of the physically separate units of Static Random Access Memory (SRAM) in order to optimize performance. However, while this random ordering improves performance, it requires the SRAM to remain in an active state, which consumes significant power.
While warp is a common term for a unit of thread scheduling promoted by the Nvidia Corporation, a similar unit of thread scheduling is known as a wavefront or a wave; the AMD Corporation has promoted the wavefront as a unit of thread scheduling having 64 threads. The problems of scheduling waves are essentially the same as for warps, aside from trivial differences in implementation. In any case, the precise number of threads in a warp or a wave is somewhat arbitrary and is subject to further revision as the industry evolves.
In a graphics processing system with a programmable shader, units of thread scheduling correspond to units of shader work, with warps and wavefronts being examples of units of shader work. Clusters of units of shader work are formed. The scheduling of the clusters is selected so that each cluster is allocated a segment of a vector register file. Additional sequencing may be performed for a cluster to reach a synchronization point. An individual register file segment is placed into a reduced power data retention mode during the latency period when the cluster associated with that segment is waiting for execution of a long latency operation request, such as a texture sample or memory load/store request.
In one embodiment a method of operating a shader in a graphics processing system includes allocating a segment of a vector register file as a resource for a cluster of shader units of work assigned to a processor and having temporal locality. There may also be spatial locality in terms of pixel shader processing, so that texture cache efficiency is further improved. In response to the cluster being in an inactive state, the segment of the vector register file associated with the cluster is placed in a reduced power data retention mode.
In one embodiment a method of operating a shader in a graphics processing system includes scheduling clusters of shader work for a plurality of processors, each cluster including a plurality of shader units of work assigned to a processor and having temporal locality. An allocation is made for each cluster to allocate a respective segment of physical memory of a vector register file as a resource, each segment having an active mode and a reduced power data retention mode independently selectable from other segments. Execution of the clusters is rotated to place the segments of inactive clusters into the reduced power data retention mode.
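The rotation of cluster execution can be pictured with a small scheduling sketch. The following Python model is purely illustrative: the names (Cluster, Segment, PowerMode) and the simple round-robin policy are assumptions, not an actual hardware implementation; it only shows how one running cluster's segment stays active while the other segments remain in the retention mode.

```python
from enum import Enum
from collections import deque

class PowerMode(Enum):
    ACTIVE = "active"
    RETENTION = "retention"   # reduced power, data retained

class Segment:
    """One independently power-managed segment of the vector register file."""
    def __init__(self, segment_id):
        self.segment_id = segment_id
        self.mode = PowerMode.RETENTION

class Cluster:
    """A cluster of units of shader work bound to one register file segment."""
    def __init__(self, cluster_id, segment):
        self.cluster_id = cluster_id
        self.segment = segment

def rotate_execution(clusters, steps):
    """Round-robin rotation: only the running cluster's segment is active;
    segments of inactive clusters stay in the reduced power retention mode."""
    queue = deque(clusters)
    for _ in range(steps):
        running = queue[0]
        for cluster in queue:
            cluster.segment.mode = (PowerMode.ACTIVE if cluster is running
                                    else PowerMode.RETENTION)
        print(f"cluster {running.cluster_id} runs, "
              f"segment {running.segment.segment_id} active")
        queue.rotate(-1)   # next cluster takes its turn

if __name__ == "__main__":
    segments = [Segment(i) for i in range(4)]
    clusters = [Cluster(i, seg) for i, seg in enumerate(segments)]
    rotate_execution(clusters, steps=4)
```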
In one embodiment a shader includes a programmable processing element. A vector register file is used as a resource for units of shader work in which each unit of shader work has a group of shader threads to perform Single Instruction Multiple Thread (SIMT) processing and multiple groups of shader threads are formed into a cluster, the vector register file being allocated as a plurality of individual segments. A scheduler groups units of shader work into clusters and selects a schedule that assigns an individual cluster to a segment of the register file and places the segment into a reduced power data retention mode during a latency period when the cluster is waiting for a result of a texture sample or memory load/store operation.
In one embodiment, a shader pipeline controller 150 includes a set of fixed function units 155 to support fixed-function graphics operations of the core. Examples of fixed functions include an Input Assembler, Vertex Shader Constructor, Hull Shader Constructor, fixed-function Tessellator, Domain Shader Constructor, Geometry Shader Constructor, Stream Out, Pixel Shader Constructor and Compute Constructor.
In one embodiment a processing slice includes eight processing elements (PEs) organized into two quads. In one embodiment the PEs support Single Instruction Multiple Thread (SIMT) operation.
In one embodiment each vector register file is allocated into segments, where each segment is capable of being placed into a reduced power data retention mode independently of the other segment(s) of the vector register file. In one embodiment the vector register file is operated as four segments, although different numbers of segments may be used depending on implementation. Each segment may, for example, be a physically separate unit of SRAM.
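One way to picture the segmentation is a simple address-to-segment mapping. The sketch below assumes, purely for illustration, a register file of four equal segments and a hypothetical total register count; actual sizes and mappings are implementation specific.

```python
NUM_SEGMENTS = 4          # the embodiment above uses four segments
REGISTERS_PER_FILE = 1024 # hypothetical total vector register count
REGISTERS_PER_SEGMENT = REGISTERS_PER_FILE // NUM_SEGMENTS

def segment_of(register_index):
    """Map a vector register index to the SRAM segment that holds it,
    assuming a simple contiguous partitioning of the register file."""
    return register_index // REGISTERS_PER_SEGMENT

# Registers 0..255 land in segment 0, 256..511 in segment 1, and so on,
# so a cluster whose registers are confined to one range touches one SRAM.
assert segment_of(0) == 0
assert segment_of(255) == 0
assert segment_of(256) == 1
assert segment_of(1023) == 3
```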
The programmable shader core 100 includes a scheduler 160 to schedule clusters of work for all of the groups of processing elements. Additionally, each group of processing elements includes a sequencer (SQ) 118.
The programmable shader core 100 includes programmable operations implemented by the processing elements 110 that can have a significant latency, such as accesses to an external memory. For example, a texture fetch requiring access to external memory may have a latency of several hundred cycles.
In one embodiment a sequencer 118 is provided to aid in managing the clusters of units of shader work received by a group of processing elements. Among other tasks, the sequencer 118 aids in reaching a synchronization point for performing a step having a high latency, such as sending a texture sample or memory load/store request for execution. In one embodiment a cluster of work is scheduled and sequenced so that the cluster is assigned to a segment of the vector register file associated with a processing element, and the segment is configured to go into a low power mode as soon as practical after the cluster of work for that segment has been sent for execution, such as via a texture sample request. It will be understood that the division of work between the sequencer 118 and scheduler 160 may be varied to achieve an equivalent overall functionality of scheduling clusters, managing clusters, and performing optimization to reach a synchronization point.
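The lifecycle the sequencer manages for a cluster can be sketched as a small state machine. The states and event names below are illustrative assumptions; they only show that the segment becomes eligible for the reduced power retention mode while the cluster is waiting on a long-latency request.

```python
from enum import Enum

class ClusterState(Enum):
    READY = "ready"        # candidate for execution
    RUNNING = "running"    # executing a trace
    WAITING = "waiting"    # long-latency request outstanding, segment may sleep

def step(state, event):
    """Hypothetical cluster state transitions managed by the sequencer.
    The segment is eligible for the reduced power retention mode only
    while the cluster is WAITING on a texture sample or load/store."""
    transitions = {
        (ClusterState.READY, "dispatch"): ClusterState.RUNNING,
        (ClusterState.RUNNING, "long_latency_request_sent"): ClusterState.WAITING,
        (ClusterState.WAITING, "all_requests_satisfied"): ClusterState.READY,
    }
    return transitions.get((state, event), state)

state = ClusterState.READY
for event in ["dispatch", "long_latency_request_sent", "all_requests_satisfied"]:
    state = step(state, event)
    print(event, "->", state.name)
```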
Additional implementation details will now be described for a warp-based embodiment of the present invention. In one embodiment each thread of a warp is allocated a register file where it can perform calculations, perform flow control activities, and reference memory. In one embodiment all threads in a warp share a scalar register file. This is a register file where values are held that are common across every member of the warp. A value can thus be computed once, in one lane, and then used across the entire warp as an operand to thread instructions.
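The benefit of the shared scalar register file can be seen in a small numerical sketch: a warp-uniform value is computed once, as if written to a scalar register, and reused as an operand by every thread, instead of being recomputed in each of the 32 lanes. The function and parameter names are illustrative only.

```python
WARP_SIZE = 32  # threads per warp in this embodiment

def shade_warp(per_thread_inputs, scale_numerator, scale_denominator):
    """Compute a warp-uniform scale factor once (as if held in a scalar
    register) and reuse it as an operand for every thread in the warp."""
    assert len(per_thread_inputs) == WARP_SIZE
    scalar_scale = scale_numerator / scale_denominator   # computed once per warp
    return [x * scalar_scale for x in per_thread_inputs] # used by all 32 threads

outputs = shade_warp(list(range(WARP_SIZE)), scale_numerator=3.0, scale_denominator=4.0)
print(outputs[:4])  # [0.0, 0.75, 1.5, 2.25]
```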
In one embodiment, a warp can manage up to 32 threads organized as 8 threads in the spatial dimension and 4 threads in the temporal dimension. In one embodiment the 8-wide spatial dimension is processed by 8 computation lanes of eight PEs. The 4 threads in the temporal dimension are managed as 4 cycles of execution in the computation unit pipeline.
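Under this 8-wide by 4-deep organization, a warp's thread index decomposes into a spatial lane and a temporal cycle. A small worked mapping follows, assuming a lane-major decomposition, which is only one of several possible conventions.

```python
LANES = 8    # spatial dimension: 8 computation lanes
CYCLES = 4   # temporal dimension: 4 pipeline cycles

def lane_and_cycle(thread_id):
    """Decompose a warp thread index (0..31) into (lane, cycle),
    assuming lane-major ordering; other orderings are equally possible."""
    assert 0 <= thread_id < LANES * CYCLES
    return thread_id % LANES, thread_id // LANES

# Thread 0 -> lane 0, cycle 0; thread 7 -> lane 7, cycle 0;
# thread 8 -> lane 0, cycle 1; thread 31 -> lane 7, cycle 3.
print(lane_and_cycle(0), lane_and_cycle(7), lane_and_cycle(8), lane_and_cycle(31))
```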
In one embodiment the shader pipeline controller includes a Cluster Scheduler and each group of processing elements includes a Sequencer (SQ). In one embodiment, for the case of warps, the scheduler in the shader pipeline controller is a warp scheduler configured so that at least two warps (e.g., 2 to 8 warps) in the same PE are assigned to consecutive shader tasks from the same shader pipeline stage; for example, four SIMT32 warps in PE0 assigned to consecutive Pixel Shader tasks for 128 pixels form a bundled group of warps called a warp cluster. Assigning warps of the same PE to consecutive shader tasks of the same shader stage results in high temporal coherence.
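The arithmetic of the example above (four SIMT32 warps covering 4 × 32 = 128 pixels) generalizes to a simple grouping rule. The sketch below assumes hypothetical task identifiers and a fixed cluster size; it only illustrates how consecutive tasks of the same shader stage are bundled into a warp cluster assigned to one PE.

```python
WARP_SIZE = 32
WARPS_PER_CLUSTER = 4   # e.g., 2 to 8 warps; 4 warps cover 4 * 32 = 128 pixels

def form_warp_clusters(pixel_shader_tasks):
    """Bundle consecutive pixel shader tasks of the same stage into warp
    clusters, so each cluster runs the same program with temporal locality."""
    clusters = []
    for start in range(0, len(pixel_shader_tasks), WARPS_PER_CLUSTER):
        clusters.append(pixel_shader_tasks[start:start + WARPS_PER_CLUSTER])
    return clusters

tasks = [f"PS_task_{i}" for i in range(8)]   # eight consecutive pixel shader warps
for pe, cluster in enumerate(form_warp_clusters(tasks)):
    pixels = len(cluster) * WARP_SIZE
    print(f"PE{pe}: cluster {cluster} covers {pixels} pixels")
```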
In one embodiment the scheduler interfaces with the shader pipeline controller to allocate units of shader work, attach register files, and initialize thread registers and scalar registers for the unit of shader work being initialized. When a shader unit of work completes, the resources of that unit of work are deallocated and made available for future allocations.
In one embodiment each group of PEs includes a cluster sequencer operating as a central control block in a PE quad that handles the sequencing and management of clusters of shader units of work. In one embodiment the sequencer contains an instruction buffer, a constant scratch register file, and the pre-decoding stage of the instruction pipeline. The cluster sequencer interfaces with the cluster scheduler in the shader pipeline controller unit to allocate shader units of work, assign the register file and other shader-related resources to each unit of work, and enable the shader constructors to deliver values to the shader units of work.
In one embodiment the sequencer manages each warp in a quad through execution. When all initialization has been performed, the warp is a candidate for execution. The cluster scheduler selects between candidates and chooses which warp will enter execution next. When the currently running warp completes the last instruction of a trace, the scheduled warp enters execution of its trace, while the sequencer monitors the completion status of outstanding requests. Once all outstanding requests are satisfied, this warp can again become a candidate and be selected to execute its next trace.
In one embodiment a trace is used to aid in the management of warps. A trace is a sequence of instructions that, once started, will proceed to completion. A trace is denoted by a trace header instruction which includes the resource requirements of all instructions up to the subsequent trace header. The resource requirement list contains the number and kind of resource requests needed to satisfy the instructions in the trace. So, for example, it will contain the number of memory reference instructions (if any) so that an appropriately sized memory address buffer can be allocated; it will contain the number of texture coordinate addresses required (if any); and it will contain the number of results emitted from the trace. Once a warp of threads starts processing the instructions of a trace, the threads proceed to the end of the trace without stalling. Each instruction in a trace is executed for each execution mask enabled member of the warp until the end of the trace is encountered. At that time, the warp scheduler will have chosen the subsequent warp and trace, and the processor begins execution of this new trace with the threads of the chosen warp.
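A trace header's resource requirement list can be pictured as a small record summarizing what the instructions up to the next header will need. The field names below are illustrative, not an actual instruction encoding; the launch check is likewise a hypothetical sketch of reserving all declared resources up front so the trace can run without stalling.

```python
from dataclasses import dataclass

@dataclass
class TraceHeader:
    """Illustrative resource summary carried by a trace header: everything
    the trace needs is known before the trace starts, so it can run to the
    end without stalling."""
    num_memory_references: int      # size of memory address buffer to allocate
    num_texture_coordinates: int    # texture coordinate addresses required
    num_results_emitted: int        # results emitted from this trace

def can_launch(header, free_address_slots, free_coord_slots):
    """A trace is launched only once all its declared resources can be
    reserved up front (hypothetical check)."""
    return (header.num_memory_references <= free_address_slots and
            header.num_texture_coordinates <= free_coord_slots)

header = TraceHeader(num_memory_references=2, num_texture_coordinates=4,
                     num_results_emitted=1)
print(can_launch(header, free_address_slots=8, free_coord_slots=4))  # True
```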
In one embodiment the scheduler keeps assigning warps from the same PE to a shader stage and allocating registers from the same segment of the vector register file until the segment is full. All the warps so assigned share the same segment of the vector register file and form a cluster. The cluster executes the same shader program on the same PE.
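How many warps fill a segment, and therefore form a cluster, follows from simple division of the segment's capacity by each warp's register allocation. The numbers below are hypothetical, chosen only to illustrate the packing rule.

```python
SEGMENT_REGISTERS = 256   # hypothetical vector registers per segment

def warps_per_cluster(registers_per_warp):
    """Number of warps that fit in one register file segment, assuming each
    warp of the cluster is allocated the same number of vector registers."""
    return SEGMENT_REGISTERS // registers_per_warp

# A lighter shader (32 registers per warp) packs 8 warps into the segment,
# while a heavier shader (64 registers per warp) packs only 4.
print(warps_per_cluster(32), warps_per_cluster(64))  # 8 4
```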
By grouping multiple warps into a warp cluster, the warps are executed with temporal locality and thus share the same instruction traces in the instruction buffer, saving instruction fetches and possibly instruction decoding. A warp cluster may execute traces out of order based on resource availability to maximize PE data path utilization. In one embodiment these warps only synchronize when the texture SAMPLE commands are processed in a texture unit, and these texture SAMPLE requests are handled strictly in order. In one embodiment the sequencer sequences the traces to prioritize the warps within the same warp cluster so that they can reach the synchronization point as soon as possible.
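The prioritization can be sketched as a trace-selection rule: traces may issue out of order, but ready traces belonging to the cluster approaching its SAMPLE synchronization point are preferred. The tuple representation and tie-break below are illustrative assumptions.

```python
def pick_next_trace(ready_traces, current_cluster_id):
    """Choose the next trace to issue. Traces may run out of order based on
    resource availability, but warps of the cluster approaching its texture
    SAMPLE sync point are preferred so the whole cluster reaches the sync
    point as soon as possible. Each entry is a (cluster_id, warp_id) pair
    (hypothetical representation)."""
    same_cluster = [t for t in ready_traces if t[0] == current_cluster_id]
    candidates = same_cluster if same_cluster else ready_traces
    return min(candidates, key=lambda t: t[1])  # simple tie-break by warp id

ready = [(1, 2), (0, 3), (0, 1)]
print(pick_next_trace(ready, current_cluster_id=0))  # (0, 1): same-cluster warp first
```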
When the last SAMPLE request of a warp cluster is sent to the texture unit, the entire cluster goes into a sleep mode and the vector register file segment switches to the reduced power data retention mode.
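One simple way to detect that the last SAMPLE of a cluster has been issued is to count how many warps in the cluster still have a SAMPLE to send; the class and mode names below are an illustrative model, not a hardware description.

```python
class ClusterPowerTracker:
    """Track outstanding texture SAMPLE issues for a warp cluster and switch
    its register file segment to the retention mode once the last SAMPLE of
    the cluster has been sent to the texture unit (illustrative model)."""
    def __init__(self, num_warps):
        self.samples_still_to_send = num_warps
        self.segment_mode = "active"

    def on_sample_sent(self):
        self.samples_still_to_send -= 1
        if self.samples_still_to_send == 0:
            # Entire cluster is now waiting on the texture unit: sleep.
            self.segment_mode = "retention"

tracker = ClusterPowerTracker(num_warps=4)
for _ in range(4):
    tracker.on_sample_sent()
print(tracker.segment_mode)  # retention
```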
While the invention has been described in conjunction with specific embodiments, it will be understood that it is not intended to limit the invention to the described embodiments. On the contrary, it is intended to cover alternatives, modifications, and equivalents as may be included within the spirit and scope of the invention as defined by the appended claims. The present invention may be practiced without some or all of these specific details. In addition, well known features may not have been described in detail to avoid unnecessarily obscuring the invention. In accordance with the present invention, the components, process steps, and/or data structures may be implemented using various types of operating systems, programming languages, computing platforms, computer programs, and/or general purpose machines. In addition, those of ordinary skill in the art will recognize that devices of a less general purpose nature, such as hardwired devices, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), or the like, may also be used without departing from the scope and spirit of the inventive concepts disclosed herein. The present invention may also be tangibly embodied as a set of computer instructions stored on a computer readable medium, such as a memory device.