Embodiments described herein generally relate to processors. In particular, embodiments described herein generally relate to Single Instruction, Multiple Thread (SIMT) Processors.
Graphics Processing Units (GPUs) and other single instruction, multiple thread (SIMT) processors are commonly used for graphics processing as well as general-purpose computing. In GPUs and other SIMT processors, a SIMT instruction is typically run or executed on all configured threads, or at least on all initialized threads.
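By way of non-limiting illustration of this execution model, the following sketch (written in CUDA, with the kernel and variable names being purely illustrative rather than part of any embodiment) shows a single instruction stream executed by every thread of a launched group, each thread operating on a different data element; threads whose index falls outside the data are simply masked off.

```cuda
// Minimal SIMT illustration: one instruction stream, many threads.
// Each thread of the launched grid executes the same "scale and add"
// instruction sequence on a different element of the input arrays.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void saxpy(int n, float a, const float* x, float* y) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // per-thread index
    if (i < n)                                      // out-of-range lanes are masked
        y[i] = a * x[i] + y[i];                     // same instruction, different data
}

int main() {
    const int n = 1024;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

    // 32-thread blocks mirror a typical SIMT warp/thread-group width.
    saxpy<<<(n + 31) / 32, 32>>>(n, 3.0f, x, y);
    cudaDeviceSynchronize();

    printf("y[0] = %f\n", y[0]);  // expect 5.0
    cudaFree(x); cudaFree(y);
    return 0;
}
```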
The invention may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments. In the drawings:
Disclosed herein are embodiments of instructions, embodiments of graphics processing units (GPUs) or other SIMT processors to perform the instructions, embodiments of methods performed by the GPUs or other SIMT processors when performing the instructions, embodiments of systems incorporating one or more GPUs or other SIMT processors to perform the instructions, and embodiments of machine-readable mediums storing or otherwise providing the instructions. In the following description, numerous specific details are set forth (e.g., specific instruction operations, instruction formats, sequences of operations, GPU designs, etc.). However, embodiments may be practiced without these specific details. In other instances, well-known circuits, structures and techniques have not been shown in detail to avoid obscuring the understanding of the description.
In one embodiment, processing system 100 can include, couple with, or be integrated within: a server-based gaming platform; a game console, including a game and media console; a mobile gaming console, a handheld game console, or an online game console. In some embodiments the processing system 100 is part of a mobile phone, smart phone, tablet computing device or mobile Internet-connected device such as a laptop with low internal storage capacity. Processing system 100 can also include, couple with, or be integrated within: a wearable device, such as a smart watch wearable device; smart eyewear or clothing enhanced with augmented reality (AR) or virtual reality (VR) features to provide visual, audio or tactile outputs to supplement real world visual, audio or tactile experiences or otherwise provide text, audio, graphics, video, holographic images or video, or tactile feedback; other augmented reality (AR) device; or other virtual reality (VR) device. In some embodiments, the processing system 100 includes or is part of a television or set top box device. In one embodiment, processing system 100 can include, couple with, or be integrated within a self-driving vehicle such as a bus, tractor trailer, car, motor or electric power cycle, plane, or glider (or any combination thereof). The self-driving vehicle may use processing system 100 to process the environment sensed around the vehicle.
In some embodiments, the one or more processors 102 each include one or more processor cores 107 to process instructions which, when executed, perform operations for system or user software. In some embodiments, at least one of the one or more processor cores 107 is configured to process a specific instruction set 109. In some embodiments, instruction set 109 may facilitate Complex Instruction Set Computing (CISC), Reduced Instruction Set Computing (RISC), or computing via a Very Long Instruction Word (VLIW). One or more processor cores 107 may process a different instruction set 109, which may include instructions to facilitate the emulation of other instruction sets. Processor core 107 may also include other processing devices, such as a Digital Signal Processor (DSP).
In some embodiments, the processor 102 includes cache memory 104. Depending on the architecture, the processor 102 can have a single internal cache or multiple levels of internal cache. In some embodiments, the cache memory is shared among various components of the processor 102. In some embodiments, the processor 102 also uses an external cache (e.g., a Level-3 (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 107 using known cache coherency techniques. A register file 106 can be additionally included in processor 102 and may include different types of registers for storing different types of data (e.g., integer registers, floating point registers, status registers, and an instruction pointer register). Some registers may be general-purpose registers, while other registers may be specific to the design of the processor 102.
In some embodiments, one or more processor(s) 102 are coupled with one or more interface bus(es) 110 to transmit communication signals such as address, data, or control signals between processor 102 and other components in the processing system 100. The interface bus 110, in one embodiment, can be a processor bus, such as a version of the Direct Media Interface (DMI) bus. However, processor busses are not limited to the DMI bus, and may include one or more Peripheral Component Interconnect buses (e.g., PCI, PCI express), memory busses, or other types of interface busses. In one embodiment the processor(s) 102 include a memory controller 116 and a platform controller hub 130. The memory controller 116 facilitates communication between a memory device and other components of the processing system 100, while the platform controller hub (PCH) 130 provides connections to I/O devices via a local I/O bus.
The memory device 120 can be a dynamic random-access memory (DRAM) device, a static random-access memory (SRAM) device, flash memory device, phase-change memory device, or some other memory device having suitable performance to serve as process memory. In one embodiment the memory device 120 can operate as system memory for the processing system 100, to store data 122 and instructions 121 for use when the one or more processors 102 execute an application or process. The memory controller 116 also couples with an optional external graphics processor 118, which may communicate with the one or more graphics processors 108 in processors 102 to perform graphics and media operations. In some embodiments, graphics, media, and/or compute operations may be assisted by an accelerator 112, which is a coprocessor that can be configured to perform a specialized set of graphics, media, or compute operations. For example, in one embodiment the accelerator 112 is a matrix multiplication accelerator used to optimize machine learning or compute operations. In one embodiment the accelerator 112 is a ray-tracing accelerator that can be used to perform ray-tracing operations in concert with the graphics processor 108. In one embodiment, an external accelerator 119 may be used in place of or in concert with the accelerator 112.
In some embodiments a display device 111 can connect to the processor(s) 102. The display device 111 can be one or more of an internal display device, as in a mobile electronic device or a laptop device, or an external display device attached via a display interface (e.g., DisplayPort, etc.). In one embodiment the display device 111 can be a head mounted display (HMD) such as a stereoscopic display device for use in virtual reality (VR) applications or augmented reality (AR) applications.
In some embodiments the platform controller hub 130 enables peripherals to connect to memory device 120 and processor 102 via a high-speed I/O bus. The I/O peripherals include, but are not limited to, an audio controller 146, a network controller 134, a firmware interface 128, a wireless transceiver 126, touch sensors 125, a data storage device 124 (e.g., non-volatile memory, volatile memory, hard disk drive, flash memory, NAND, 3D NAND, 3D XPoint, etc.). The data storage device 124 can connect via a storage interface (e.g., SATA) or via a peripheral bus, such as a Peripheral Component Interconnect bus (e.g., PCI, PCI express). The touch sensors 125 can include touch screen sensors, pressure sensors, or fingerprint sensors. The wireless transceiver 126 can be a Wi-Fi transceiver, a Bluetooth transceiver, or a mobile network transceiver such as a 3G, 4G, 5G, or Long-Term Evolution (LTE) transceiver. The firmware interface 128 enables communication with system firmware, and can be, for example, a unified extensible firmware interface (UEFI). The network controller 134 can enable a network connection to a wired network. In some embodiments, a high-performance network controller (not shown) couples with the interface bus 110. The audio controller 146, in one embodiment, is a multi-channel high-definition audio controller. In one embodiment the processing system 100 includes an optional legacy I/O controller 140 for coupling legacy (e.g., Personal System 2 (PS/2)) devices to the system. The platform controller hub 130 can also connect to one or more Universal Serial Bus (USB) controllers 142 to connect to input devices, such as keyboard and mouse 143 combinations, a camera 144, or other USB input devices.
It will be appreciated that the processing system 100 shown is exemplary and not limiting, as other types of data processing systems that are differently configured may also be used. For example, an instance of the memory controller 116 and platform controller hub 130 may be integrated into a discrete external graphics processor, such as the external graphics processor 118. In one embodiment the platform controller hub 130 and/or memory controller 116 may be external to the one or more processor(s) 102 and reside in a system chipset that is in communication with the processor(s) 102.
For example, circuit boards (“sleds”) can be used on which components such as CPUs, memory, and other components are placed, and are designed for increased thermal performance. In some examples, processing components such as the processors are located on a top side of a sled while near memory, such as DIMMs, are located on a bottom side of the sled. As a result of the enhanced airflow provided by this design, the components may operate at higher frequencies and power levels than in typical systems, thereby increasing performance. Furthermore, the sleds are configured to blindly mate with power and data communication cables in a rack, thereby enhancing their ability to be quickly removed, upgraded, reinstalled, and/or replaced. Similarly, individual components located on the sleds, such as processors, accelerators, memory, and data storage drives, are configured to be easily upgraded due to their increased spacing from each other. In the illustrative embodiment, the components additionally include hardware attestation features to prove their authenticity.
A data center can utilize a single network architecture (“fabric”) that supports multiple other network architectures including Ethernet and Omni-Path. The sleds can be coupled to switches via optical fibers, which provide higher bandwidth and lower latency than typical twisted pair cabling (e.g., Category 5, Category 5e, Category 6, etc.). Due to the high bandwidth, low latency interconnections and network architecture, the data center may, in use, pool resources, such as memory, accelerators (e.g., GPUs, graphics accelerators, FPGAs, ASICs, neural network and/or artificial intelligence accelerators, etc.), and data storage drives that are physically disaggregated, and provide them to compute resources (e.g., processors) on an as needed basis, enabling the compute resources to access the pooled resources as if they were local.
A power supply or source can provide voltage and/or current to processing system 100 or any component or system described herein. In one example, the power supply includes an AC to DC (alternating current to direct current) adapter to plug into a wall outlet. Such AC power can be a renewable energy (e.g., solar power) power source. In one example, the power source includes a DC power source, such as an external AC to DC converter. In one example, the power source or power supply includes wireless charging hardware to charge via proximity to a charging field. In one example, the power source can include an internal battery, alternating current supply, motion-based power supply, solar power supply, or fuel cell source.
In some embodiments, processor 200 may also include a set of one or more bus controller units 216 and a system agent core 210. The one or more bus controller units 216 manage a set of peripheral buses, such as one or more PCI or PCI express busses. System agent core 210 provides management functionality for the various processor components. In some embodiments, system agent core 210 includes one or more integrated memory controllers 214 to manage access to various external memory devices (not shown).
In some embodiments, one or more of the processor cores 202A-202N include support for simultaneous multi-threading. In such an embodiment, the system agent core 210 includes components for coordinating and operating cores 202A-202N during multi-threaded processing. System agent core 210 may additionally include a power control unit (PCU), which includes logic and components to regulate the power state of processor cores 202A-202N and graphics processor 208.
In some embodiments, processor 200 additionally includes graphics processor 208 to execute graphics processing operations. In some embodiments, the graphics processor 208 couples with the set of shared cache units 206, and the system agent core 210, including the one or more integrated memory controllers 214. In some embodiments, the system agent core 210 also includes a display controller 211 to drive graphics processor output to one or more coupled displays. In some embodiments, display controller 211 may also be a separate module coupled with the graphics processor via at least one interconnect, or may be integrated within the graphics processor 208.
In some embodiments, a ring-based interconnect 212 is used to couple the internal components of the processor 200. However, an alternative interconnect unit may be used, such as a point-to-point interconnect, a switched interconnect, a mesh interconnect, or other techniques, including techniques well known in the art. In some embodiments, graphics processor 208 couples with the ring-based interconnect 212 via an I/O link 213.
The exemplary I/O link 213 represents at least one of multiple varieties of I/O interconnects, including an on package I/O interconnect which facilitates communication between various processor components and a high-performance embedded memory module 218, such as an eDRAM module or a high-bandwidth memory (HBM) module. In some embodiments, each of the processor cores 202A-202N and graphics processor 208 can use the embedded memory module 218 as a shared Last Level Cache.
In some embodiments, processor cores 202A-202N are homogeneous cores executing the same instruction set architecture. In another embodiment, processor cores 202A-202N are heterogeneous in terms of instruction set architecture (ISA), where one or more of processor cores 202A-202N execute a first instruction set, while at least one of the other cores executes a subset of the first instruction set or a different instruction set. In one embodiment, processor cores 202A-202N are heterogeneous in terms of microarchitecture, where one or more cores having relatively higher power consumption couple with one or more cores having lower power consumption. In one embodiment, processor cores 202A-202N are heterogeneous in terms of computational capability. Additionally, processor 200 can be implemented on one or more chips or as an SoC integrated circuit having the illustrated components, in addition to other components.
In some embodiments, the function block 230 includes a geometry/fixed function pipeline 231 that can be shared by all graphics cores in the graphics processor core block 219. In various embodiments, the geometry/fixed function pipeline 231 includes a 3D geometry pipeline, a video front-end unit, a thread spawner and global thread dispatcher, and a unified return buffer manager, which manages unified return buffers. In one embodiment the function block 230 also includes a graphics SoC interface 232, a graphics microcontroller 233, and a media pipeline 234. The graphics SoC interface 232 provides an interface between the graphics processor core block 219 and other core blocks within a graphics processor or compute accelerator SoC. The graphics microcontroller 233 is a programmable sub-processor that is configurable to manage various functions of the graphics processor core block 219, including thread dispatch, scheduling, and pre-emption. The media pipeline 234 includes logic to facilitate the decoding, encoding, preprocessing, and/or post-processing of multimedia data, including image and video data. The media pipeline 234 implements media operations via requests to compute or sampling logic within the graphics cores 221A-221F. One or more pixel backends 235 can also be included within the function block 230. The pixel backends 235 include a cache memory to store pixel color values and can perform blend operations and lossless color compression of rendered pixel data.
In one embodiment the graphics SoC interface 232 enables the graphics processor core block 219 to communicate with general-purpose application processor cores (e.g., CPUs) and/or other components within an SoC or a system host CPU that is coupled with the SoC via a peripheral interface. The graphics SoC interface 232 also enables communication with off-chip memory hierarchy elements such as a shared last level cache memory, system RAM, and/or embedded on-chip or on-package DRAM. The SoC interface 232 can also enable communication with fixed function devices within the SoC, such as camera imaging pipelines, and enables the use of and/or implements global memory atomics that may be shared between the graphics processor core block 219 and CPUs within the SoC. The graphics SoC interface 232 can also implement power management controls for the graphics processor core block 219 and enable an interface between a clock domain of the graphics processor core block 219 and other clock domains within the SoC. In one embodiment the graphics SoC interface 232 enables receipt of command buffers from a command streamer and global thread dispatcher that are configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. The commands and instructions can be dispatched to the media pipeline 234 when media operations are to be performed, or to the geometry and fixed function pipeline 231 when graphics processing operations are to be performed. When compute operations are to be performed, compute dispatch logic can dispatch the commands to the graphics cores 221A-221F, bypassing the geometry and media pipelines.
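Purely as a software analogy for the command routing described above (and not as a description of the hardware), the following sketch models a dispatcher that sends media commands to a media pipeline, 3D commands to a geometry/fixed function pipeline, and compute commands directly to the graphics cores; all type and function names are hypothetical.

```cuda
// Hypothetical sketch of command routing: media commands go to the media
// pipeline, 3D commands to the geometry/fixed function pipeline, and compute
// commands directly to the graphics cores, bypassing the other pipelines.
#include <cstdio>

enum class CommandType { Media, Geometry3D, Compute };

struct Command { CommandType type; int payload; };

void dispatch(const Command& cmd) {
    switch (cmd.type) {
        case CommandType::Media:
            printf("media pipeline handles command %d\n", cmd.payload);
            break;
        case CommandType::Geometry3D:
            printf("geometry/fixed function pipeline handles command %d\n", cmd.payload);
            break;
        case CommandType::Compute:
            printf("compute dispatch sends command %d to the graphics cores\n", cmd.payload);
            break;
    }
}

int main() {
    Command cmds[] = { {CommandType::Geometry3D, 1}, {CommandType::Media, 2}, {CommandType::Compute, 3} };
    for (const Command& c : cmds) dispatch(c);
    return 0;
}
```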
The graphics microcontroller 233 can be configured to perform various scheduling and management tasks for the graphics processor core block 219. In one embodiment the graphics microcontroller 233 can perform graphics and/or compute workload scheduling on the various vector engines 222A-222F, 224A-224F and matrix engines 223A-223F, 225A-225F within the graphics cores 221A-221F. In this scheduling model, host software executing on a CPU core of an SoC including the graphics processor core block 219 can submit workloads to one of multiple graphics processor doorbells, which invokes a scheduling operation on the appropriate graphics engine. Scheduling operations include determining which workload to run next, submitting a workload to a command streamer, pre-empting existing workloads running on an engine, monitoring progress of a workload, and notifying host software when a workload is complete. In one embodiment the graphics microcontroller 233 can also facilitate low-power or idle states for the graphics processor core block 219, providing the graphics processor core block 219 with the ability to save and restore registers within the graphics processor core block 219 across low-power state transitions independently from the operating system and/or graphics driver software on the system.
The graphics processor core block 219 may have more or fewer graphics cores than the illustrated graphics cores 221A-221F, up to N modular graphics cores. For each set of N graphics cores, the graphics processor core block 219 can also include shared/cache memory 236, which can be configured as shared memory or cache memory, rasterizer logic 237, and additional fixed function logic 238 to accelerate various graphics and compute processing operations.
Within each of the graphics cores 221A-221F is a set of execution resources that may be used to perform graphics, media, and compute operations in response to requests by graphics pipelines, media pipelines, or shader programs. The graphics cores 221A-221F include multiple vector engines 222A-222F, 224A-224F, matrix acceleration units 223A-223F, 225A-225D, cache/shared local memory (SLM), samplers 226A-226F, and ray tracing units 227A-227F.
The vector engines 222A-222F, 224A-224F are general-purpose graphics processing units capable of performing floating-point and integer/fixed-point logic operations in service of a graphics, media, or compute operation, including graphics, media, or compute/GPGPU programs. The vector engines 222A-222F, 224A-224F can operate at variable vector widths using SIMD, SIMT, or SIMT+SIMD execution modes. The matrix acceleration units 223A-223F, 225A-225D include matrix-matrix and matrix-vector acceleration logic that improves performance on matrix operations, particularly low and mixed precision (e.g., INT8, FP16, BF16) matrix operations used for machine learning. In one embodiment, each of the matrix acceleration units 223A-223F, 225A-225D includes one or more systolic arrays of processing elements that can perform concurrent matrix multiply or dot product operations on matrix elements.
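As a hedged illustration of the low and mixed precision arithmetic referred to above (and not of the matrix acceleration unit circuitry itself), the following CUDA kernel accumulates 8-bit integer products into 32-bit sums, the common INT8-input/INT32-accumulate pattern used for machine learning; the kernel and buffer names are illustrative.

```cuda
// Mixed-precision matrix-vector product: INT8 inputs, INT32 accumulation.
// Each thread computes one output row, mimicking the "low precision in,
// wider precision out" pattern used by matrix acceleration logic.
#include <cstdio>
#include <cstdint>
#include <cuda_runtime.h>

__global__ void matvec_int8(int rows, int cols,
                            const int8_t* A, const int8_t* x, int32_t* y) {
    int r = blockIdx.x * blockDim.x + threadIdx.x;
    if (r >= rows) return;
    int32_t acc = 0;                       // wide accumulator avoids overflow
    for (int c = 0; c < cols; ++c)
        acc += int32_t(A[r * cols + c]) * int32_t(x[c]);
    y[r] = acc;
}

int main() {
    const int rows = 4, cols = 8;
    int8_t *A, *x; int32_t *y;
    cudaMallocManaged(&A, rows * cols);
    cudaMallocManaged(&x, cols);
    cudaMallocManaged(&y, rows * sizeof(int32_t));
    for (int i = 0; i < rows * cols; ++i) A[i] = int8_t(i % 5 - 2);
    for (int i = 0; i < cols; ++i)        x[i] = int8_t(i - 4);

    matvec_int8<<<1, 32>>>(rows, cols, A, x, y);
    cudaDeviceSynchronize();
    for (int r = 0; r < rows; ++r) printf("y[%d] = %d\n", r, y[r]);
    cudaFree(A); cudaFree(x); cudaFree(y);
    return 0;
}
```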
The sampler 226A-226F can read media or texture data into memory and can sample data differently based on a configured sampler state and the texture/media format that is being read. Threads executing on the vector engines 222A-222F, 224A-224F or matrix acceleration units 223A-223F, 225A-225D can make use of the cache/SLM 228A-228F within each execution core. The cache/SLM 228A-228F can be configured as cache memory or as a pool of shared memory that is local to each of the respective graphics cores 221A-221F. The ray tracing units 227A-227F within the graphics cores 221A-221F include ray traversal/intersection circuitry for performing ray traversal using bounding volume hierarchies (BVHs) and identifying intersections between rays and primitives enclosed within the BVH volumes. In one embodiment the ray tracing units 227A-227F include circuitry for performing depth testing and culling (e.g., using a depth buffer or similar arrangement). In one implementation, the ray tracing units 227A-227F perform traversal and intersection operations in concert with image denoising, at least a portion of which may be performed using an associated matrix acceleration unit 223A-223F, 225A-225D.
As illustrated, a multi-core group 240A may include a set of graphics cores 243, a set of tensor cores 244, and a set of ray tracing cores 245. A scheduler/dispatcher 241 schedules and dispatches the graphics threads for execution on the various cores 243, 244, 245. In one embodiment the tensor cores 244 are sparse tensor cores with hardware to enable multiplication operations having a zero-value input to be bypassed. The graphics cores 243 of the GPU 239 of
A set of register files 242 can store operand values used by the cores 243, 244, 245 when executing the graphics threads. These may include, for example, integer registers for storing integer values, floating point registers for storing floating point values, vector registers for storing packed data elements (integer and/or floating-point data elements) and tile registers for storing tensor/matrix values. In one embodiment, the tile registers are implemented as combined sets of vector registers.
One or more combined level 1 (L1) caches and shared memory units 247 store graphics data such as texture data, vertex data, pixel data, ray data, bounding volume data, etc., locally within each multi-core group 240A. One or more texture units 247 can also be used to perform texturing operations, such as texture mapping and sampling. A Level 2 (L2) cache 253 shared by all or a subset of the multi-core groups 240A-240N stores graphics data and/or instructions for multiple concurrent graphics threads. As illustrated, the L2 cache 253 may be shared across a plurality of multi-core groups 240A-240N. One or more memory controllers 248 couple the GPU 239 to a memory 249 which may be a system memory (e.g., DRAM) and/or a dedicated graphics memory (e.g., GDDR6 memory).
Input/output (I/O) circuitry 250 couples the GPU 239 to one or more I/O devices 252 such as digital signal processors (DSPs), network controllers, or user input devices. An on-chip interconnect may be used to couple the I/O devices 252 to the GPU 239 and memory 249. One or more I/O memory management units (IOMMUs) 251 of the I/O circuitry 250 couple the I/O devices 252 directly to the memory 249. In one embodiment, the IOMMU 251 manages multiple sets of page tables to map virtual addresses to physical addresses in memory 249. In this embodiment, the I/O devices 252, CPU(s) 246, and GPU 239 may share the same virtual address space.
In one implementation, the IOMMU 251 supports virtualization. In this case, it may manage a first set of page tables to map guest/graphics virtual addresses to guest/graphics physical addresses and a second set of page tables to map the guest/graphics physical addresses to system/host physical addresses (e.g., within memory 249). The base addresses of each of the first and second sets of page tables may be stored in control registers and swapped out on a context switch (e.g., so that the new context is provided with access to the relevant set of page tables). While not illustrated in
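The nested translation described above, in which a guest virtual address is mapped to a guest physical address and then to a host physical address, can be modeled in software as two successive page-table lookups. The single-level tables below are a hypothetical simplification of real multi-level page tables and are provided only for illustration.

```cuda
// Hypothetical two-stage address translation: a guest virtual address is
// first translated to a guest physical address, which is then translated
// to a host physical address, mirroring the IOMMU's two sets of page tables.
#include <cstdio>
#include <cstdint>
#include <unordered_map>

constexpr uint64_t kPageSize = 4096;

uint64_t translate(const std::unordered_map<uint64_t, uint64_t>& table, uint64_t addr) {
    uint64_t vpn = addr / kPageSize;           // virtual page number
    uint64_t offset = addr % kPageSize;        // byte offset within the page
    return table.at(vpn) * kPageSize + offset; // fault handling omitted
}

int main() {
    std::unordered_map<uint64_t, uint64_t> guest_tables = {{0x10, 0x20}}; // GVA page -> GPA page
    std::unordered_map<uint64_t, uint64_t> host_tables  = {{0x20, 0x80}}; // GPA page -> HPA page

    uint64_t gva = 0x10 * kPageSize + 0x123;
    uint64_t gpa = translate(guest_tables, gva);  // first set of page tables
    uint64_t hpa = translate(host_tables, gpa);   // second set of page tables
    printf("GVA 0x%llx -> GPA 0x%llx -> HPA 0x%llx\n",
           (unsigned long long)gva, (unsigned long long)gpa, (unsigned long long)hpa);
    return 0;
}
```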
In one embodiment, the CPUs 246, GPU 239, and I/O devices 252 are integrated on a single semiconductor chip and/or chip package. The memory 249 may be integrated on the same chip or may be coupled to the memory controllers 248 via an off-chip interface. In one implementation, the memory 249 comprises GDDR6 memory which shares the same virtual address space as other physical system-level memories, although the underlying principles of the embodiments described herein are not limited to this specific implementation.
In one embodiment, the tensor cores 244 include a plurality of functional units specifically designed to perform matrix operations, which are the fundamental compute operation used to perform deep learning operations. For example, simultaneous matrix multiplication operations may be used for neural network training and inferencing. The tensor cores 244 may perform matrix processing using a variety of operand precisions including single precision floating-point (e.g., 32 bits), half-precision floating point (e.g., 16 bits), integer words (16 bits), bytes (8 bits), and half-bytes (4 bits). In one embodiment, a neural network implementation extracts features of each rendered scene, potentially combining details from multiple frames, to construct a high-quality final image.
In deep learning implementations, parallel matrix multiplication work may be scheduled for execution on the tensor cores 244. The training of neural networks, in particular, requires a significant number of matrix dot product operations. In order to process an inner-product formulation of an N x N x N matrix multiply, the tensor cores 244 may include at least N dot-product processing elements. Before the matrix multiply begins, one entire matrix is loaded into tile registers and at least one column of a second matrix is loaded each cycle for N cycles. Each cycle, there are N dot products that are processed.
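A software model of this dataflow, assuming a small N purely for illustration, is shown below: one matrix is held resident (standing in for the tile registers), one column of the second matrix is streamed in per cycle, and N dot-product elements each produce one result per cycle.

```cuda
// Software model of the inner-product dataflow described above for an
// N x N x N matrix multiply: matrix A is resident ("tile registers"),
// one column of matrix B is streamed in per cycle, and N dot-product
// elements each produce one result per cycle.
#include <cstdio>

constexpr int N = 4;

int main() {
    float A[N][N], B[N][N], C[N][N];
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) { A[i][j] = i + 1; B[i][j] = j + 1; }

    for (int cycle = 0; cycle < N; ++cycle) {      // one B column per cycle
        float column[N];
        for (int k = 0; k < N; ++k) column[k] = B[k][cycle];

        for (int pe = 0; pe < N; ++pe) {           // N dot-product elements
            float dot = 0.0f;
            for (int k = 0; k < N; ++k) dot += A[pe][k] * column[k];
            C[pe][cycle] = dot;                    // N dot products per cycle
        }
    }

    printf("C[0][0] = %.1f, C[%d][%d] = %.1f\n", C[0][0], N - 1, N - 1, C[N - 1][N - 1]);
    return 0;
}
```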
Matrix elements may be stored at different precisions depending on the particular implementation, including 16-bit words, 8-bit bytes (e.g., INT8) and 4-bit half-bytes (e.g., INT4). Different precision modes may be specified for the tensor cores 244 to ensure that the most efficient precision is used for different workloads (e.g., such as inferencing workloads which can tolerate quantization to bytes and half-bytes).
In one embodiment, the ray tracing cores 245 accelerate ray tracing operations for both real-time ray tracing and non-real-time ray tracing implementations. In particular, the ray tracing cores 245 include ray traversal/intersection circuitry for performing ray traversal using bounding volume hierarchies (BVHs) and identifying intersections between rays and primitives enclosed within the BVH volumes. The ray tracing cores 245 may also include circuitry for performing depth testing and culling (e.g., using a Z buffer or similar arrangement). In one implementation, the ray tracing cores 245 perform traversal and intersection operations in concert with the image denoising techniques described herein, at least a portion of which may be executed on the tensor cores 244. For example, in one embodiment, the tensor cores 244 implement a deep learning neural network to perform denoising of frames generated by the ray tracing cores 245. However, the CPU(s) 246, graphics cores 243, and/or ray tracing cores 245 may also implement all or a portion of the denoising and/or deep learning algorithms.
In addition, as described above, a distributed approach to denoising may be employed in which the GPU 239 is in a computing device coupled to other computing devices over a network or high-speed interconnect. In this embodiment, the interconnected computing devices share neural network learning/training data to improve the speed with which the overall system learns to perform denoising for different types of image frames and/or different graphics applications.
In one embodiment, the ray tracing cores 245 process all BVH traversal and ray-primitive intersections, saving the graphics cores 243 from being overloaded with thousands of instructions per ray. In one embodiment, each ray tracing core 245 includes a first set of specialized circuitry for performing bounding box tests (e.g., for traversal operations) and a second set of specialized circuitry for performing the ray-triangle intersection tests (e.g., intersecting rays which have been traversed). Thus, in one embodiment, the multi-core group 240A can simply launch a ray probe, and the ray tracing cores 245 independently perform ray traversal and intersection and return hit data (e.g., a hit, no hit, multiple hits, etc.) to the thread context. The other cores 243, 244 are freed to perform other graphics or compute work while the ray tracing cores 245 perform the traversal and intersection operations.
In one embodiment, each ray tracing core 245 includes a traversal unit to perform BVH testing operations and an intersection unit which performs ray-primitive intersection tests. The intersection unit generates a “hit”, “no hit”, or “multiple hit” response, which it provides to the appropriate thread. During the traversal and intersection operations, the execution resources of the other cores (e.g., graphics cores 243 and tensor cores 244) are freed to perform other forms of graphics work.
In one particular embodiment described below, a hybrid rasterization/ray tracing approach is used in which work is distributed between the graphics cores 243 and ray tracing cores 245.
In one embodiment, the ray tracing cores 245 (and/or other cores 243, 244) include hardware support for a ray tracing instruction set such as Microsoft’s DirectX Ray Tracing (DXR) which includes a DispatchRays command, as well as ray-generation, closest-hit, any-hit, and miss shaders, which enable the assignment of unique sets of shaders and textures for each object. Another ray tracing platform which may be supported by the ray tracing cores 245, graphics cores 243 and tensor cores 244 is Vulkan 1.1.85. Note, however, that the underlying principles of the embodiments described herein are not limited to any particular ray tracing ISA.
In general, the various cores 245, 244, 243 may support a ray tracing instruction set that includes instructions/functions for ray generation, closest hit, any hit, ray-primitive intersection, per-primitive and hierarchical bounding box construction, miss, visit, and exceptions. More specifically, one embodiment includes ray tracing instructions to perform the following functions:
Ray Generation - Ray generation instructions may be executed for each pixel, sample, or other user-defined work assignment.
Closest Hit - A closest hit instruction may be executed to locate the closest intersection point of a ray with primitives within a scene.
Any Hit - An any hit instruction identifies multiple intersections between a ray and primitives within a scene, potentially to identify a new closest intersection point.
Intersection - An intersection instruction performs a ray-primitive intersection test and outputs a result.
Per-primitive Bounding box Construction - This instruction builds a bounding box around a given primitive or group of primitives (e.g., when building a new BVH or other acceleration data structure).
Miss - Indicates that a ray misses all geometry within a scene, or specified region of a scene.
Visit - Indicates the child volumes a ray will traverse.
Exceptions - Includes various types of exception handlers (e.g., invoked for various error conditions).
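By way of illustration of the Intersection function listed above (and not of the ray tracing core circuitry), the following CUDA sketch performs a standard Möller-Trumbore ray-triangle intersection test and outputs a hit or no-hit result together with the hit distance; the function and variable names are illustrative.

```cuda
// Illustrative ray-triangle intersection test (Moller-Trumbore), producing
// a hit / no-hit result and hit distance t, analogous to the Intersection
// instruction described above.
#include <cstdio>
#include <cuda_runtime.h>

struct Vec3 { float x, y, z; };

__host__ __device__ Vec3 sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
__host__ __device__ Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
__host__ __device__ float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns true on a hit and writes the distance along the ray into *t.
__host__ __device__ bool intersect(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, float* t) {
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (det > -1e-7f && det < 1e-7f) return false;      // ray parallel to triangle
    float inv = 1.0f / det;
    Vec3 s = sub(orig, v0);
    float u = dot(s, p) * inv;
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(s, e1);
    float v = dot(dir, q) * inv;
    if (v < 0.0f || u + v > 1.0f) return false;
    *t = dot(e2, q) * inv;
    return *t > 1e-7f;                                   // hit must be in front of the origin
}

__global__ void probe(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2, int* hit, float* t) {
    *hit = intersect(orig, dir, v0, v1, v2, t) ? 1 : 0;
}

int main() {
    int* hit; float* t;
    cudaMallocManaged(&hit, sizeof(int));
    cudaMallocManaged(&t, sizeof(float));
    Vec3 orig = {0, 0, 0}, dir = {0, 0, 1};
    Vec3 v0 = {-1, -1, 5}, v1 = {1, -1, 5}, v2 = {0, 1, 5};
    probe<<<1, 1>>>(orig, dir, v0, v1, v2, hit, t);
    cudaDeviceSynchronize();
    if (*hit) printf("hit at t = %f\n", *t); else printf("no hit\n");  // expect t = 5.0
    cudaFree(hit); cudaFree(t);
    return 0;
}
```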
In one embodiment the ray tracing cores 245 may be adapted to accelerate general-purpose compute operations that can be accelerated using computational techniques that are analogous to ray intersection tests. A compute framework can be provided that enables shader programs to be compiled into low level instructions and/or primitives that perform general-purpose compute operations via the ray tracing cores. Exemplary computational problems that can benefit from compute operations performed on the ray tracing cores 245 include computations involving beam, wave, ray, or particle propagation within a coordinate space. Interactions associated with that propagation can be computed relative to a geometry or mesh within the coordinate space. For example, computations associated with electromagnetic signal propagation through an environment can be accelerated via the use of instructions or primitives that are executed via the ray tracing cores. Diffraction and reflection of the signals by objects in the environment can be computed as direct ray-tracing analogies.
Ray tracing cores 245 can also be used to perform computations that are not directly analogous to ray tracing. For example, mesh projection, mesh refinement, and volume sampling computations can be accelerated using the ray tracing cores 245. Generic coordinate space calculations, such as nearest neighbor calculations can also be performed. For example, the set of points near a given point can be discovered by defining a bounding box in the coordinate space around the point. BVH and ray probe logic within the ray tracing cores 245 can then be used to determine the set of point intersections within the bounding box. The intersections constitute the origin point and the nearest neighbors to that origin point. Computations that are performed using the ray tracing cores 245 can be performed in parallel with computations performed on the graphics cores 243 and tensor cores 244. A shader compiler can be configured to compile a compute shader or other general-purpose graphics processing program into low level primitives that can be parallelized across the graphics cores 243, tensor cores 244, and ray tracing cores 245.
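A simplified CUDA sketch of this nearest-neighbor style query is shown below; it substitutes a brute-force axis-aligned bounding-box test for the BVH and ray probe logic and is intended only to illustrate the idea, with all names and sizes being hypothetical.

```cuda
// Illustrative nearest-neighbor style query: flag every point that falls
// inside an axis-aligned bounding box centered on a query point. In the
// scheme described above, the same test would be served by BVH traversal
// and ray probe logic on the ray tracing cores; here it is brute force.
#include <cstdio>
#include <cuda_runtime.h>

struct Point { float x, y, z; };

__global__ void box_query(int n, const Point* pts, Point center, float half_extent, int* inside) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;
    bool in_box = fabsf(pts[i].x - center.x) <= half_extent &&
                  fabsf(pts[i].y - center.y) <= half_extent &&
                  fabsf(pts[i].z - center.z) <= half_extent;
    inside[i] = in_box ? 1 : 0;   // candidates for the nearest-neighbor set
}

int main() {
    const int n = 256;
    Point* pts; int* inside;
    cudaMallocManaged(&pts, n * sizeof(Point));
    cudaMallocManaged(&inside, n * sizeof(int));
    for (int i = 0; i < n; ++i) pts[i] = {float(i % 16), float((i / 16) % 16), 0.0f};

    Point query = {8.0f, 8.0f, 0.0f};
    box_query<<<(n + 63) / 64, 64>>>(n, pts, query, 1.5f, inside);
    cudaDeviceSynchronize();

    int count = 0;
    for (int i = 0; i < n; ++i) count += inside[i];
    printf("%d points inside the query box\n", count);   // expect 9 for this grid
    cudaFree(pts); cudaFree(inside);
    return 0;
}
```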
The GPGPU 270 includes multiple cache memories, including an L2 cache 253, L1 cache 254, an instruction cache 255, and shared memory 256, at least a portion of which may also be partitioned as a cache memory. The GPGPU 270 also includes multiple compute units 260A-260N, which represent a hierarchical abstraction level analogous to the graphics cores 221A-221F of
During operation, the one or more CPU(s) 246 can write commands into registers or memory in the GPGPU 270 that has been mapped into an accessible address space. The command processors 257 can read the commands from registers or memory and determine how those commands will be processed within the GPGPU 270. A thread dispatcher 258 can then be used to dispatch threads to the compute units 260A-260N to perform those commands. Each compute unit 260A-260N can execute threads independently of the other compute units. Additionally, each compute unit 260A-260N can be independently configured for conditional computation and can conditionally output the results of computation to memory. The command processors 257 can interrupt the one or more CPU(s) 246 when the submitted commands are complete.
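The submission flow above can be pictured, purely as a host-side software analogy with hypothetical names, as a ring of commands written by the CPU, consumed by a command processor, and fanned out by a dispatcher to independent compute units.

```cuda
// Host-side software model (not hardware) of command submission: the CPU
// writes commands into memory-mapped storage, a command processor consumes
// them, and a thread dispatcher hands work to independent compute units.
#include <cstdio>
#include <queue>
#include <vector>

struct Command { int id; int work_items; };

int main() {
    std::queue<Command> ring;                 // stands in for mapped registers/memory
    for (int i = 0; i < 3; ++i) ring.push({i, 8 * (i + 1)});   // CPU writes commands

    const int num_compute_units = 4;
    std::vector<int> issued(num_compute_units, 0);

    while (!ring.empty()) {                   // command processor reads commands
        Command cmd = ring.front(); ring.pop();
        for (int t = 0; t < cmd.work_items; ++t) {
            int cu = t % num_compute_units;   // thread dispatcher assigns threads
            issued[cu]++;
        }
        printf("command %d complete\n", cmd.id);  // notify the CPU when done
    }
    for (int cu = 0; cu < num_compute_units; ++cu)
        printf("compute unit %d executed %d threads\n", cu, issued[cu]);
    return 0;
}
```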
In some embodiments, graphics processor 300 also includes a display controller 302 to drive display output data to a display device 318. Display controller 302 includes hardware for one or more overlay planes for the display and composition of multiple layers of video or user interface elements. The display device 318 can be an internal or external display device. In one embodiment the display device 318 is a head mounted display device, such as a virtual reality (VR) display device or an augmented reality (AR) display device. In some embodiments, graphics processor 300 includes a video codec engine 306 to encode, decode, or transcode media to, from, or between one or more media encoding formats, including, but not limited to Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, H.265/HEVC, Alliance for Open Media (AOMedia) VP8, VP9, as well as the Society of Motion Picture & Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG, and Motion JPEG (MJPEG) formats.
In some embodiments, graphics processor 300 includes a block image transfer (BLIT) engine to perform two-dimensional (2D) rasterizer operations including, for example, bit-boundary block transfers. However, in one embodiment, 2D graphics operations are performed using one or more components of graphics processing engine (GPE) 310. In some embodiments, GPE 310 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.
In some embodiments, GPE 310 includes a 3D pipeline 312 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that act upon 3D primitive shapes (e.g., rectangle, triangle, etc.). The 3D pipeline 312 includes programmable and fixed function elements that perform various tasks within the element and/or spawn execution threads to a 3D/Media subsystem 315. While 3D pipeline 312 can be used to perform media operations, an embodiment of GPE 310 also includes a media pipeline 316 that is specifically used to perform media operations, such as video post-processing and image enhancement.
In some embodiments, media pipeline 316 includes fixed function or programmable logic units to perform one or more specialized media operations, such as video decode acceleration, video de-interlacing, and video encode acceleration in place of, or on behalf of video codec engine 306. In some embodiments, media pipeline 316 additionally includes a thread spawning unit to spawn threads for execution on 3D/Media subsystem 315. The spawned threads perform computations for the media operations on one or more graphics cores included in 3D/Media subsystem 315.
In some embodiments, 3D/Media subsystem 315 includes logic for executing threads spawned by 3D pipeline 312 and media pipeline 316. In one embodiment, the pipelines send thread execution requests to 3D/Media subsystem 315, which includes thread dispatch logic for arbitrating and dispatching the various requests to available thread execution resources. The execution resources include an array of graphics cores to process the 3D and media threads. In some embodiments, 3D/Media subsystem 315 includes one or more internal caches for thread instructions and data. In some embodiments, the subsystem also includes shared memory, including registers and addressable memory, to share data between threads and to store output data.
The graphics processor 320 may be configured with a non-uniform memory access (NUMA) system in which memory devices 326A-326D are coupled with associated graphics engine tiles 310A-310D. A given memory device may be accessed by graphics engine tiles other than the tile to which it is directly connected. However, access latency to the memory devices 326A-326D may be lowest when accessing a local tile. In one embodiment, a cache coherent NUMA (ccNUMA) system is enabled that uses the tile interconnects 323A-323F to enable communication between cache controllers within the graphics engine tiles 310A-310D to maintain a consistent memory image when more than one cache stores the same memory location.
The graphics processing engine cluster 322 can connect with an on-chip or on-package fabric interconnect 324. In one embodiment the fabric interconnect 324 includes a network processor, network on a chip (NoC), or another switching processor to enable the fabric interconnect 324 to act as a packet switched fabric interconnect that switches data packets between components of the graphics processor 320. The fabric interconnect 324 can enable communication between graphics engine tiles 310A-310D and components such as the video codec engine 306 and one or more copy engines 304. The copy engines 304 can be used to move data out of, into, and between the memory devices 326A-326D and memory that is external to the graphics processor 320 (e.g., system memory). The fabric interconnect 324 can also couple with one or more of the tile interconnects 323A-323F to facilitate or enhance the interconnection between the graphics engine tiles 310A-310D. The fabric interconnect 324 is also configurable to interconnect multiple instances of the graphics processor 320 (e.g., via the host interface 328), enabling tile-to-tile communication between graphics engine tiles 310A-310D of multiple GPUs. In one embodiment, the graphics engine tiles 310A-310D of multiple GPUs can be presented to a host system as a single logical device.
The graphics processor 320 may optionally include a display controller 302 to enable a connection with the display device 318. The graphics processor may also be configured as a graphics or compute accelerator. In the accelerator configuration, the display controller 302 and display device 318 may be omitted.
The graphics processor 320 can connect to a host system via a host interface 328. The host interface 328 can enable communication between the graphics processor 320, system memory, and/or other system components. The host interface 328 can be, for example, a PCI express bus or another type of host system interface. For example, the host interface 328 may be an NVLink or NVSwitch interface. The host interface 328 and fabric interconnect 324 can cooperate to enable multiple instances of the graphics processor 320 to act as a single logical device. Cooperation between the host interface 328 and fabric interconnect 324 can also enable the individual graphics engine tiles 310A-310D to be presented to the host system as distinct logical graphics devices.
The compute accelerator 330 can also include an integrated network interface 342. In one embodiment the network interface 342 includes a network processor and controller logic that enables the compute engine cluster 332 to communicate over a physical layer interconnect 344 without requiring data to traverse memory of a host system. In one embodiment, one of the compute engine tiles 340A-340D is replaced by network processor logic and data to be transmitted or received via the physical layer interconnect 344 may be transmitted directly to or from memory 326A-326D. Multiple instances of the compute accelerator 330 may be joined via the physical layer interconnect 344 into a single logical device. Alternatively, the various compute engine tiles 340A-340D may be presented as distinct network accessible compute accelerator devices.
In some embodiments, GPE 410 couples with or includes a command streamer 403, which provides a command stream to the 3D pipeline 312 and/or media pipelines 316. Alternatively or additionally, the command streamer 403 may be directly coupled to a unified return buffer 418. The unified return buffer 418 may be communicatively coupled to a graphics core cluster 414. In some embodiments, command streamer 403 is coupled with memory, which can be system memory, or one or more of internal cache memory and shared cache memory. In some embodiments, command streamer 403 receives commands from the memory and sends the commands to 3D pipeline 312 and/or media pipeline 316. The commands are directives fetched from a ring buffer, which stores commands for the 3D pipeline 312 and media pipeline 316. In one embodiment, the ring buffer can additionally include batch command buffers storing batches of multiple commands. The commands for the 3D pipeline 312 can also include references to data stored in memory, such as but not limited to vertex and geometry data for the 3D pipeline 312 and/or image data and memory objects for the media pipeline 316. The 3D pipeline 312 and media pipeline 316 process the commands and data by performing operations via logic within the respective pipelines or by dispatching one or more execution threads to a graphics core cluster 414. In one embodiment the graphics core cluster 414 includes one or more blocks of graphics cores (e.g., graphics core block 415A, graphics core block 415B), each block including one or more graphics cores. Each graphics core includes a set of graphics execution resources that includes general-purpose and graphics specific execution logic to perform graphics and compute operations, as well as fixed function texture processing and/or machine learning and artificial intelligence acceleration logic, such as matrix or AI acceleration logic.
In various embodiments the 3D pipeline 312 can include fixed function and programmable logic to process one or more shader programs, such as vertex shaders, geometry shaders, pixel shaders, fragment shaders, compute shaders, or other shader and/or GPGPU programs, by processing the instructions and dispatching execution threads to the graphics core cluster 414. The graphics core cluster 414 provides a unified block of execution resources for use in processing these shader programs. Multi-purpose execution logic within the graphics core blocks 415A-415B of the graphics core cluster 414 includes support for various 3D API shader languages and can execute multiple simultaneous execution threads associated with multiple shaders.
In some embodiments, the graphics core cluster 414 includes execution logic to perform media functions, such as video and/or image processing. In one embodiment, the graphics cores include general-purpose logic that is programmable to perform parallel general-purpose computational operations, in addition to graphics processing operations. The general-purpose logic can perform processing operations in parallel or in conjunction with general-purpose logic within the processor core(s) 107 of
Threads executing on the graphics core cluster 414 can output generated data to memory in a unified return buffer (URB) 418. The URB 418 can store data for multiple threads. In some embodiments the URB 418 may be used to send data between different threads executing on the graphics core cluster 414. In some embodiments the URB 418 may additionally be used for synchronization between threads on the graphics core array and fixed function logic within the shared function logic 420.
In some embodiments, graphics core cluster 414 is scalable, such that the cluster includes a variable number of graphics core blocks, each having a variable number of graphics cores based on the target power and performance level of GPE 410. In one embodiment the execution resources are dynamically scalable, such that execution resources may be enabled or disabled as needed.
The graphics core cluster 414 couples with shared function logic 420 that includes multiple resources that are shared between the graphics cores in the graphics core array. The shared functions within the shared function logic 420 are hardware logic units that provide specialized supplemental functionality to the graphics core cluster 414. In various embodiments, shared function logic 420 may include, but is not limited to sampler 421, math 422, and inter-thread communication (ITC) 423 logic. Additionally, some embodiments implement one or more cache(s) 425 within the shared function logic 420. The shared function logic 420 can implement the same or similar functionality as the additional fixed function logic 238 of
A shared function is implemented at least in cases where the demand for a given specialized function is insufficient to justify inclusion within the graphics core cluster 414. Instead, a single instantiation of that specialized function is implemented as a stand-alone entity in the shared function logic 420 and shared among the execution resources within the graphics core cluster 414. The precise set of functions that are shared between the graphics core cluster 414 and included within the graphics core cluster 414 varies across embodiments. In some embodiments, specific shared functions within the shared function logic 420 that are used extensively by the graphics core cluster 414 may be included within shared function logic 416 within the graphics core cluster 414. In various embodiments, the shared function logic 416 within the graphics core cluster 414 can include some or all logic within the shared function logic 420. In one embodiment, all logic elements within the shared function logic 420 may be duplicated within the shared function logic 416 of the graphics core cluster 414. In one embodiment the shared function logic 420 is excluded in favor of the shared function logic 416 within the graphics core cluster 414.
As shown in
With reference to graphics core 515A, the vector engine 502A and matrix engine 503A are configurable to perform parallel compute operations on data in a variety of integer and floating-point data formats based on instructions associated with shader programs. Each vector engine 502A and matrix engine 503A can act as a programmable general-purpose computational unit that is capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. The vector engine 502A and matrix engine 503A support the processing of variable width vectors at various SIMD widths, including but not limited to SIMD8, SIMD16, and SIMD32. Input data elements can be stored as a packed data type in a register and the vector engine 502A and matrix engine 503A can process the various elements based on the data size of the elements. For example, when operating on a 256-bit wide vector, the 256 bits of the vector are stored in a register and the vector is processed as four separate 64-bit packed data elements (Quad-Word (QW) size data elements), eight separate 32-bit packed data elements (Double Word (DW) size data elements), sixteen separate 16-bit packed data elements (Word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, different vector widths and register sizes are possible. In one embodiment, the vector engine 502A and matrix engine 503A are also configurable for SIMT operation on warps or thread groups of various sizes (e.g., 8, 16, or 32 threads).
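The reinterpretation of one 256-bit register at different packed element widths can be sketched as follows; the sketch is software only and the array names are illustrative.

```cuda
// One 256-bit (32-byte) vector viewed at different packed element widths:
// 4 x QW (64-bit), 8 x DW (32-bit), 16 x W (16-bit), or 32 x B (8-bit).
#include <cstdio>
#include <cstdint>
#include <cstring>

int main() {
    uint8_t reg[32];                                  // one 256-bit "register"
    for (int i = 0; i < 32; ++i) reg[i] = uint8_t(i); // fill with a byte pattern

    uint64_t qw[4];  uint32_t dw[8];  uint16_t w[16];
    std::memcpy(qw, reg, sizeof(reg));                // 4 Quad-Word elements
    std::memcpy(dw, reg, sizeof(reg));                // 8 Double-Word elements
    std::memcpy(w,  reg, sizeof(reg));                // 16 Word elements

    printf("QW lanes: %d, DW lanes: %d, W lanes: %d, B lanes: %d\n",
           int(sizeof(reg) / sizeof(qw[0])), int(sizeof(reg) / sizeof(dw[0])),
           int(sizeof(reg) / sizeof(w[0])),  int(sizeof(reg)));
    printf("dw[0] = 0x%08x\n", dw[0]);                // 0x03020100 on a little-endian host
    return 0;
}
```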
Continuing with graphics core 515A, the memory load/store unit 504A services memory access requests that are issued by the vector engine 502A, matrix engine 503A, and/or other components of the graphics core 515A that have access to memory. The memory access request can be processed by the memory load/store unit 504A to load or store the requested data to or from cache or memory into a register file associated with the vector engine 502A and/or matrix engine 503A. The memory load/store unit 504A can also perform prefetching operations. In one embodiment, the memory load/store unit 504A is configured to provide SIMT scatter/gather prefetching or block prefetching for data stored in memory 610, from memory that is local to other tiles via the tile interconnect 608, or from system memory. Prefetching can be performed to a specific L1 cache (e.g., data cache/shared local memory 506A), the L2 cache 604 or the L3 cache 606. In one embodiment, a prefetch to the L3 cache 606 automatically results in the data being stored in the L2 cache 604.
The instruction cache 505A stores instructions to be executed by the graphics core 515A. In one embodiment, the graphics core 515A also includes instruction fetch and prefetch circuitry that fetches or prefetches instructions into the instruction cache 505A. The graphics core 515A also includes instruction decode logic to decode instructions within the instruction cache 505A. The data cache/shared local memory 506A can be configured as a data cache that is managed by a cache controller that implements a cache replacement policy and/or configured as explicitly managed shared memory. The ray tracing unit 508A includes circuitry to accelerate ray tracing operations. The sampler 510A provides texture sampling for 3D operations and media sampling for media operations. The fixed function logic 512A includes fixed function circuitry that is shared between the various instances of the vector engine 502A and matrix engine 503A. Graphics cores 515B-515N can operate in a similar manner as graphics core 515A.
Functionality of the instruction caches 505A-505N, data caches/shared local memory 506A-506N, ray tracing units 508A-508N, samplers 510A-510N, and fixed function logic 512A-512N corresponds with equivalent functionality in the graphics processor architectures described herein. For example, the instruction caches 505A-505N can operate in a similar manner as instruction cache 255 of
As shown in
In one embodiment the vector engine 502 has an architecture that is a combination of Simultaneous Multi-Threading (SMT) and fine-grained Interleaved Multi-Threading (IMT). The architecture has a modular configuration that can be fine-tuned at design time based on a target number of simultaneous threads and number of registers per graphics core, where graphics core resources are divided across logic used to execute multiple simultaneous threads. The number of logical threads that may be executed by the vector engine 502 is not limited to the number of hardware threads, and multiple logical threads can be assigned to each hardware thread.
In one embodiment, the vector engine 502 can co-issue multiple instructions, which may each be different instructions. The thread arbiter 522 can dispatch the instructions to one of the send unit 530, branch unit 532, or SIMD FPU(s) 534 for execution. Each execution thread can access 128 general-purpose registers within the GRF 524, where each register can store 32 bytes, accessible as a variable width vector of 32-bit data elements. In one embodiment, each thread has access to 4 Kbytes within the GRF 524, although embodiments are not so limited, and greater or fewer register resources may be provided in other embodiments. In one embodiment the vector engine 502 is partitioned into seven hardware threads that can independently perform computational operations, although the number of threads per vector engine 502 can also vary according to embodiments. For example, in one embodiment up to 16 hardware threads are supported. In an embodiment in which seven threads may access 4 Kbytes, the GRF 524 can store a total of 28 Kbytes. Where 16 threads may access 4 Kbytes, the GRF 524 can store a total of 64 Kbytes. Flexible addressing modes can permit registers to be addressed together to build effectively wider registers or to represent strided rectangular block data structures.
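The register file sizing described above follows from simple arithmetic, captured below as compile-time constants for illustration only.

```cuda
// Register file sizing implied by the description above: 128 registers of
// 32 bytes give each thread 4 Kbytes; 7 or 16 hardware threads then imply
// 28 Kbytes or 64 Kbytes of GRF storage, respectively.
#include <cstdio>

int main() {
    constexpr int regs_per_thread  = 128;
    constexpr int bytes_per_reg    = 32;
    constexpr int bytes_per_thread = regs_per_thread * bytes_per_reg;   // 4096 bytes
    constexpr int grf_7_threads    = 7  * bytes_per_thread;             // 28 Kbytes
    constexpr int grf_16_threads   = 16 * bytes_per_thread;             // 64 Kbytes
    printf("per-thread: %d KB, 7 threads: %d KB, 16 threads: %d KB\n",
           bytes_per_thread / 1024, grf_7_threads / 1024, grf_16_threads / 1024);
    return 0;
}
```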
In one embodiment, memory operations, sampler operations, and other longer-latency system communications are dispatched via “send” instructions that are executed by the message passing send unit 530. In one embodiment, branch instructions are dispatched to a dedicated branch unit 532 to facilitate SIMD divergence and eventual convergence.
In one embodiment the vector engine 502 includes one or more SIMD floating point units (FPU(s)) 534 to perform floating-point operations. In one embodiment, the FPU(s) 534 also support integer computation. In one embodiment the FPU(s) 534 can execute up to M 32-bit floating-point (or integer) operations, or up to 2M 16-bit integer or 16-bit floating-point operations. In one embodiment, at least one of the FPU(s) provides extended math capability to support high-throughput transcendental math functions and double precision 64-bit floating-point. In some embodiments, a set of 8-bit integer SIMD ALUs 535 is also present and may be specifically optimized to perform operations associated with machine learning computations. In one embodiment, the SIMD ALUs are replaced by an additional set of SIMD FPUs 534 that are configurable to perform integer and floating-point operations. In one embodiment, the SIMD FPUs 534 and SIMD ALUs 535 are configurable to execute SIMT programs. In one embodiment, combined SIMD+SIMT operation is supported.
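The doubled 16-bit rate noted above is analogous to packed half-precision arithmetic, in which each 32-bit lane carries two FP16 values. The following CUDA sketch uses the standard cuda_fp16 header to show the pattern; the kernel and buffer names are illustrative, and FP16 arithmetic of this kind assumes a device of compute capability 5.3 or higher.

```cuda
// Packed half-precision arithmetic: each 32-bit lane holds two FP16 values,
// so one __hadd2 performs two 16-bit additions, analogous to executing 2M
// 16-bit operations where M 32-bit operations would otherwise issue.
#include <cstdio>
#include <cuda_fp16.h>
#include <cuda_runtime.h>

__global__ void add_half2(int n, const __half2* a, const __half2* b, __half2* c) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = __hadd2(a[i], b[i]);      // two FP16 adds per instruction
}

int main() {
    const int n = 128;                          // 128 __half2 values = 256 FP16 values
    __half2 *a, *b, *c;
    cudaMallocManaged(&a, n * sizeof(__half2));
    cudaMallocManaged(&b, n * sizeof(__half2));
    cudaMallocManaged(&c, n * sizeof(__half2));
    for (int i = 0; i < n; ++i) {
        a[i] = __floats2half2_rn(1.0f, 2.0f);
        b[i] = __floats2half2_rn(3.0f, 4.0f);
    }
    add_half2<<<1, 128>>>(n, a, b, c);
    cudaDeviceSynchronize();
    float2 r = __half22float2(c[0]);
    printf("c[0] = (%f, %f)\n", r.x, r.y);      // expect (4.0, 6.0)
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```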
In one embodiment, arrays of multiple instances of the vector engine 502 can be instantiated in a graphics core. For scalability, product architects can choose the exact number of vector engines per graphics core grouping. In one embodiment the vector engine 502 can execute instructions across a plurality of execution channels. In a further embodiment, each thread executed on the vector engine 502 is executed on a different channel.
As shown in
In one embodiment, during each cycle, each stage can add the result of operations performed at that stage to the output of the previous stage. In other embodiments, the pattern of data movement between the processing elements 552AA-552MN after a set of computational cycles can vary based on the instruction or macro-operation being performed. For example, in one embodiment partial sum loopback is enabled and the processing elements may instead add the output of a current cycle with output generated in the previous cycle. In one embodiment, the final stage of the systolic array can be configured with a loopback to the initial stage of the systolic array. In such an embodiment, the number of physical pipeline stages may be decoupled from the number of logical pipeline stages that are supported by the matrix engine 503. For example, where the processing elements 552AA-552MN are configured as a systolic array of M physical stages, a loopback from stage M to the initial pipeline stage can enable the processing elements 552AA-552MN to operate as a systolic array of, for example, 2M, 3M, 4M, etc., logical pipeline stages.
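A simplified host-side C++ sketch of the loopback idea above, assuming M physical stages that each accumulate a partial sum; feeding the output of stage M back to the first stage lets the same hardware behave as k*M logical stages. The per-stage operation below is a stand-in, not the actual datapath of the matrix engine 503.

    #include <vector>
    #include <cstdio>

    // One pass through M physical stages; each stage adds its contribution
    // to the partial sum produced by the previous stage.
    static float run_physical_stages(const std::vector<float>& stage_inputs, float partial) {
        for (float v : stage_inputs)   // stage_inputs.size() == M physical stages
            partial += v;              // illustrative per-stage operation
        return partial;
    }

    int main() {
        const int M = 4;                       // physical pipeline depth
        const int logical_passes = 3;          // loopback gives 3*M logical stages
        std::vector<float> inputs(M, 1.0f);
        float acc = 0.0f;
        for (int p = 0; p < logical_passes; ++p)
            acc = run_physical_stages(inputs, acc);  // loopback: result re-enters stage 1
        std::printf("accumulated over %d logical stages: %g\n", M * logical_passes, acc);
        return 0;
    }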
In one embodiment, the matrix engine 503 includes memory 541A-541N, 542A-542M to store input data in the form of row and column data for input matrices. Memory 542A-542M is configurable to store row elements (A0-Am) of a first input matrix and memory 541A-541N is configurable to store column elements (B0-Bn) of a second input matrix. The row and column elements are provided as input to the processing elements 552AA-552MN for processing. In one embodiment, row and column elements of the input matrices can be stored in a systolic register file 540 within the matrix engine 503 before those elements are provided to the memory 541A-541N, 542A-542M. In one embodiment, the systolic register file 540 is excluded and the memory 541A-541N, 542A-542M is loaded from registers in an associated vector engine (e.g., GRF 524 of vector engine 502 of
In some embodiments, the matrix engine 503 is configured with support for input sparsity, where multiplication operations for sparse regions of input data can be bypassed by skipping multiply operations that have a zero-value operand. In one embodiment, the processing elements 552AA-552MN are configured to skip the performance of certain operations that have zero value input. In one embodiment, sparsity within input matrices can be detected and operations having known zero output values can be bypassed before being submitted to the processing elements 552AA-552MN. The loading of zero value operands into the processing elements can be bypassed and the processing elements 552AA-552MN can be configured to perform multiplications on the non-zero value input elements. The matrix engine 503 can also be configured with support for output sparsity, such that operations with results that are pre-determined to be zero are bypassed. For input sparsity and/or output sparsity, in one embodiment, metadata is provided to the processing elements 552AA-552MN to indicate, for a processing cycle, which processing elements and/or data channels are to be active during that cycle.
In one embodiment, the matrix engine 503 includes hardware to enable operations on sparse data having a compressed representation of a sparse matrix that stores non-zero values and metadata that identifies the positions of the non-zero values within the matrix. Exemplary compressed representations include but are not limited to compressed tensor representations such as compressed sparse row (CSR), compressed sparse column (CSC), and compressed sparse fiber (CSF) representations. Support for compressed representations enables operations to be performed on input in a compressed tensor format without requiring the compressed representation to be decompressed or decoded. In such an embodiment, operations can be performed only on non-zero input values and the resulting non-zero output values can be mapped into an output matrix. In some embodiments, hardware support is also provided for machine-specific lossless data compression formats that are used when transmitting data within hardware or across system busses. Such data may be retained in a compressed format for sparse input data and the matrix engine 503 can use the compression metadata for the compressed data to enable operations to be performed on only non-zero values, or to enable blocks of zero data input to be bypassed for multiply operations.
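For illustration, a compact CSR sparse matrix-vector multiply in the spirit of the compressed representations named above: only non-zero values are stored and multiplied, so zero-valued operands are never submitted to the arithmetic loop. This is a host-side C++ sketch, not the matrix engine 503 datapath.

    #include <vector>
    #include <cstdio>

    // CSR: values[] holds non-zeros, col[] their column positions,
    // row_ptr[r]..row_ptr[r+1] bounds the non-zeros of row r.
    struct CsrMatrix {
        std::vector<float> values;
        std::vector<int>   col;
        std::vector<int>   row_ptr;
    };

    static std::vector<float> spmv(const CsrMatrix& a, const std::vector<float>& x) {
        std::vector<float> y(a.row_ptr.size() - 1, 0.0f);
        for (size_t r = 0; r + 1 < a.row_ptr.size(); ++r)
            for (int i = a.row_ptr[r]; i < a.row_ptr[r + 1]; ++i)
                y[r] += a.values[i] * x[a.col[i]];   // zeros are simply absent
        return y;
    }

    int main() {
        // 2x3 matrix [[5 0 0], [0 0 3]] in CSR form.
        CsrMatrix a{{5.0f, 3.0f}, {0, 2}, {0, 1, 2}};
        std::vector<float> x{1.0f, 2.0f, 4.0f};
        std::vector<float> y = spmv(a, x);
        std::printf("y = [%g, %g]\n", y[0], y[1]);   // [5, 12]
        return 0;
    }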
In various embodiments, input data can be provided by a programmer in a compressed tensor representation, or a codec can compress input data into the compressed tensor representation or another sparse data encoding. In addition to support for compressed tensor representations, streaming compression of sparse input data can be performed before the data is provided to the processing elements 552AA-552MN. In one embodiment, compression is performed on data written to a cache memory associated with the graphics core cluster 414, with the compression being performed with an encoding that is supported by the matrix engine 503. In one embodiment, the matrix engine 503 includes support for input having structured sparsity in which a pre-determined level or pattern of sparsity is imposed on input data. This data may be compressed to a known compression ratio, with the compressed data being processed by the processing elements 552AA-552MN according to metadata associated with the compressed data.
The tile 600 can include or couple with an L3 cache 606 and memory 610. In various embodiments, the L3 cache 606 may be excluded or the tile 600 can include additional levels of cache, such as an L4 cache. In one embodiment, each instance of the tile 600 in the multi-tile graphics processor has an associated memory 610, such as in
A memory fabric 603 enables communication among the graphics core clusters 414A-414N, L3 cache 606, and memory 610. An L2 cache 604 couples with the memory fabric 603 and is configurable to cache transactions performed via the memory fabric 603. A tile interconnect 608 enables communication with other tiles on the graphics processors and may be one of tile interconnects 323A-323F of
In some embodiments, the graphics processor natively supports instructions in a 128-bit instruction format 710. A 64-bit compacted instruction format 730 is available for some instructions based on the selected instruction, instruction options, and number of operands. The native 128-bit instruction format 710 provides access to all instruction options, while some options and operations are restricted in the 64-bit format 730. The native instructions available in the 64-bit format 730 vary by embodiment. In some embodiments, the instruction is compacted in part using a set of index values in an index field 713. The graphics core hardware references a set of compaction tables based on the index values and uses the compaction table outputs to reconstruct a native instruction in the 128-bit instruction format 710. Other sizes and formats of instruction can be used.
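A rough C++ sketch of the compaction idea just described: an index field in the 64-bit form selects entries from compaction tables, and the table outputs are combined with the verbatim fields to reconstruct a native-format instruction. The field layout, widths, and table contents below are invented purely for illustration and do not reflect the actual encoding.

    #include <array>
    #include <cstdint>
    #include <cstdio>

    // Hypothetical compaction table: each index expands to a group of
    // native-format control bits (placeholder values).
    static const std::array<uint32_t, 4> kControlTable = {0x00000000u, 0x00010000u,
                                                          0x00200000u, 0x00210000u};

    // Expand a compacted instruction into (part of) a native 128-bit instruction,
    // represented here as two 64-bit halves.
    static void expand(uint64_t compact, uint64_t native[2]) {
        uint32_t opcode = compact & 0x7Fu;          // assume the opcode is kept verbatim
        uint32_t index  = (compact >> 8) & 0x3u;    // assumed 2-bit index field
        native[0] = opcode | kControlTable[index];  // controls reconstructed via the table
        native[1] = compact >> 32;                  // remaining operand bits copied through
    }

    int main() {
        uint64_t native[2];
        expand(0x0000000100000240ull, native);
        std::printf("native[0]=0x%016llx native[1]=0x%016llx\n",
                    (unsigned long long)native[0], (unsigned long long)native[1]);
        return 0;
    }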
For each format, instruction opcode 712 defines the operation that the graphics core is to perform. The graphics cores execute each instruction in parallel across the multiple data elements of each operand. For example, in response to an add instruction the graphics core performs a simultaneous add operation across each color channel representing a texture element or picture element. By default, the graphics core performs each instruction across all data channels of the operands. In some embodiments, instruction control field 714 enables control over certain execution options, such as channel selection (e.g., predication) and data channel order (e.g., swizzle). For instructions in the 128-bit instruction format 710 an exec-size field 716 limits the number of data channels that will be executed in parallel. In some embodiments, exec-size field 716 is not available for use in the 64-bit compact instruction format 730.
Some graphics core instructions have up to three operands including two source operands, src0 720, src1 722, and one destination 718. In some embodiments, the graphics cores support dual destination instructions, where one of the destinations is implied. Data manipulation instructions can have a third source operand (e.g., SRC2 724), where the instruction opcode 712 determines the number of source operands. An instruction’s last source operand can be an immediate (e.g., hard-coded) value passed with the instruction.
In some embodiments, the 128-bit instruction format 710 includes an access/address mode field 726 specifying, for example, whether direct register addressing mode or indirect register addressing mode is used. When direct register addressing mode is used, the register address of one or more operands is directly provided by bits in the instruction.
In some embodiments, the 128-bit instruction format 710 includes an access/address mode field 726, which specifies an address mode and/or an access mode for the instruction. In one embodiment the access mode is used to define a data access alignment for the instruction. Some embodiments support access modes including a 16-byte aligned access mode and a 1-byte aligned access mode, where the byte alignment of the access mode determines the access alignment of the instruction operands. For example, when in a first mode, the instruction may use byte-aligned addressing for source and destination operands and when in a second mode, the instruction may use 16-byte-aligned addressing for all source and destination operands.
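As a small illustration of the alignment distinction above, the following generic C++ helper rounds an operand address down to a 16-byte boundary, as a 16-byte aligned access mode would require, versus leaving it byte-aligned; it is not specific to any instruction encoding.

    #include <cstdint>
    #include <cstdio>

    // Round an address down to the nearest 16-byte boundary.
    static uint64_t align16(uint64_t addr) { return addr & ~uint64_t{15}; }

    int main() {
        uint64_t addr = 0x1003;
        std::printf("byte-aligned: 0x%llx, 16-byte aligned: 0x%llx\n",
                    (unsigned long long)addr, (unsigned long long)align16(addr));
        return 0;
    }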
In one embodiment, the address mode portion of the access/address mode field 726 determines whether the instruction is to use direct or indirect addressing. When direct register addressing mode is used bits in the instruction directly provide the register address of one or more operands. When indirect register addressing mode is used, the register address of one or more operands may be computed based on an address register value and an address immediate field in the instruction.
In some embodiments instructions are grouped based on opcode 712 bit-fields to simplify Opcode decode 740. For an 8-bit opcode, bits 4, 5, and 6 allow the graphics core to determine the type of opcode. The precise opcode grouping shown is merely an example. In some embodiments, a move and logic opcode group 742 includes data movement and logic instructions (e.g., move (mov), compare (cmp)). In some embodiments, move and logic group 742 shares the five most significant bits (MSB), where move (mov) instructions are in the form of 0000xxxxb and logic instructions are in the form of 0001xxxxb. A flow control instruction group 744 (e.g., call, jump (jmp)) includes instructions in the form of 0010xxxxb (e.g., 0x20). A miscellaneous instruction group 746 includes a mix of instructions, including synchronization instructions (e.g., wait, send) in the form of 0011xxxxb (e.g., 0x30). A parallel math instruction group 748 includes component-wise arithmetic instructions (e.g., add, multiply (mul)) in the form of 0100xxxxb (e.g., 0x40). The parallel math instruction group 748 performs the arithmetic operations in parallel across data channels. The vector math group 750 includes arithmetic instructions (e.g., dp4) in the form of 0101xxxxb (e.g., 0x50). The vector math group performs arithmetic such as dot product calculations on vector operands. The illustrated opcode decode 740, in one embodiment, can be used to determine which portion of a graphics core will be used to execute a decoded instruction. For example, some instructions may be designated as systolic instructions that will be performed by a systolic array. Other instructions, such as ray-tracing instructions (not shown) can be routed to a ray-tracing core or ray-tracing logic within a slice or partition of execution logic.
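A small C++ sketch of the grouping logic just described: testing bits 4 through 6 of an 8-bit opcode classifies it into one of the example groups. The group names mirror the text; the mapping follows the example encodings given above and is not a complete decoder.

    #include <cstdint>
    #include <cstdio>

    enum class OpcodeGroup { MoveLogic, FlowControl, Miscellaneous, ParallelMath, VectorMath, Other };

    // Classify an 8-bit opcode using bits 4-6, per the example grouping:
    // 0000xxxxb/0001xxxxb move+logic, 0010xxxxb flow control, 0011xxxxb misc,
    // 0100xxxxb parallel math, 0101xxxxb vector math.
    static OpcodeGroup classify(uint8_t opcode) {
        switch ((opcode >> 4) & 0x7) {
            case 0x0: case 0x1: return OpcodeGroup::MoveLogic;
            case 0x2:           return OpcodeGroup::FlowControl;
            case 0x3:           return OpcodeGroup::Miscellaneous;
            case 0x4:           return OpcodeGroup::ParallelMath;
            case 0x5:           return OpcodeGroup::VectorMath;
            default:            return OpcodeGroup::Other;
        }
    }

    int main() {
        std::printf("0x20 -> flow control? %d\n", classify(0x20) == OpcodeGroup::FlowControl);
        std::printf("0x40 -> parallel math? %d\n", classify(0x40) == OpcodeGroup::ParallelMath);
        return 0;
    }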
In some embodiments, graphics processor 800 includes a geometry pipeline 820, a media pipeline 830, a display engine 840, thread execution logic 850, and a render output pipeline 870. In some embodiments, graphics processor 800 is a graphics processor within a multi-core processing system that includes one or more general-purpose processing cores. The graphics processor is controlled by register writes to one or more control registers (not shown) or via commands issued to graphics processor 800 via a ring interconnect 802. In some embodiments, ring interconnect 802 couples graphics processor 800 to other processing components, such as other graphics processors or general-purpose processors. Commands from ring interconnect 802 are interpreted by a command streamer 803, which supplies instructions to individual components of the geometry pipeline 820 or the media pipeline 830.
In some embodiments, command streamer 803 directs the operation of a vertex fetcher 805 that reads vertex data from memory and executes vertex-processing commands provided by command streamer 803. In some embodiments, vertex fetcher 805 provides vertex data to a vertex shader 807, which performs coordinate space transformation and lighting operations to each vertex. In some embodiments, vertex fetcher 805 and vertex shader 807 execute vertex-processing instructions by dispatching execution threads to graphics cores 852A-852B via a thread dispatcher 831.
In some embodiments, graphics cores 852A-852B are an array of vector processors having an instruction set for performing graphics and media operations. In some embodiments, graphics cores 852A-852B have an attached L1 cache 851 that is specific for each array or shared between the arrays. The cache can be configured as a data cache, an instruction cache, or a single cache that is partitioned to contain data and instructions in different partitions.
In some embodiments, geometry pipeline 820 includes tessellation components to perform hardware-accelerated tessellation of 3D objects. In some embodiments, a programmable hull shader 811 configures the tessellation operations. A programmable domain shader 817 provides back-end evaluation of tessellation output. A tessellator 813 operates at the direction of hull shader 811 and contains special purpose logic to generate a set of detailed geometric objects based on a coarse geometric model that is provided as input to geometry pipeline 820. In some embodiments, if tessellation is not used, tessellation components (e.g., hull shader 811, tessellator 813, and domain shader 817) can be bypassed. The tessellation components can operate based on data received from the vertex shader 807.
In some embodiments, complete geometric objects can be processed by a geometry shader 819 via one or more threads dispatched to graphics cores 852A-852B or can proceed directly to the clipper 829. In some embodiments, the geometry shader operates on entire geometric objects, rather than vertices or patches of vertices as in previous stages of the graphics pipeline. If tessellation is disabled, the geometry shader 819 receives input from the vertex shader 807. In some embodiments, geometry shader 819 is programmable by a geometry shader program to perform geometry tessellation if the tessellation units are disabled.
Before rasterization, a clipper 829 processes vertex data. The clipper 829 may be a fixed function clipper or a programmable clipper having clipping and geometry shader functions. In some embodiments, a rasterizer and depth test component 873 in the render output pipeline 870 dispatches pixel shaders to convert the geometric objects into per pixel representations. In some embodiments, pixel shader logic is included in thread execution logic 850. In some embodiments, an application can bypass the rasterizer and depth test component 873 and access un-rasterized vertex data via a stream out unit 823.
The graphics processor 800 has an interconnect bus, interconnect fabric, or some other interconnect mechanism that allows data and message passing amongst the major components of the processor. In some embodiments, graphics cores 852A-852B and associated logic units (e.g., L1 cache 851, sampler 854, texture cache 858, etc.) interconnect via a data port 856 to perform memory access and communicate with render output pipeline components of the processor. In some embodiments, sampler 854, caches 851, 858 and graphics cores 852A-852B each have separate memory access paths. In one embodiment the texture cache 858 can also be configured as a sampler cache.
In some embodiments, render output pipeline 870 contains a rasterizer and depth test component 873 that converts vertex-based objects into an associated pixel-based representation. In some embodiments, the rasterizer logic includes a windower/masker unit to perform fixed function triangle and line rasterization. An associated render cache 878 and depth cache 879 are also available in some embodiments. A pixel operations component 877 performs pixel-based operations on the data, though in some instances, pixel operations associated with 2D operations (e.g., bit block image transfers with blending) are performed by the 2D engine 841, or substituted at display time by the display controller 843 using overlay display planes. In some embodiments, a shared L3 cache 875 is available to all graphics components, allowing the sharing of data without the use of main system memory.
In some embodiments, media pipeline 830 includes a media engine 837 and a video front-end 834. In some embodiments, video front-end 834 receives pipeline commands from the command streamer 803. In some embodiments, media pipeline 830 includes a separate command streamer. In some embodiments, video front-end 834 processes media commands before sending the command to the media engine 837. In some embodiments, media engine 837 includes thread spawning functionality to spawn threads for dispatch to thread execution logic 850 via thread dispatcher 831.
In some embodiments, graphics processor 800 includes a display engine 840. In some embodiments, display engine 840 is external to processor 800 and couples with the graphics processor via the ring interconnect 802, or some other interconnect bus or fabric. In some embodiments, display engine 840 includes a 2D engine 841 and a display controller 843. In some embodiments, display engine 840 contains special purpose logic capable of operating independently of the 3D pipeline. In some embodiments, display controller 843 couples with a display device (not shown), which may be a system integrated display device, as in a laptop computer, or an external display device attached via a display device connector.
In some embodiments, the geometry pipeline 820 and media pipeline 830 are configurable to perform operations based on multiple graphics and media programming interfaces and are not specific to any one application programming interface (API). In some embodiments, driver software for the graphics processor translates API calls that are specific to a particular graphics or media library into commands that can be processed by the graphics processor. In some embodiments, support is provided for the Open Graphics Library (OpenGL), Open Computing Language (OpenCL), and/or Vulkan graphics and compute API, all from the Khronos Group. In some embodiments, support may also be provided for the Direct3D library from the Microsoft Corporation. In some embodiments, a combination of these libraries may be supported. Support may also be provided for the Open Source Computer Vision Library (OpenCV). A future API with a compatible 3D pipeline would also be supported if a mapping can be made from the pipeline of the future API to the pipeline of the graphics processor.
In some embodiments, client 902 specifies the client unit of the graphics device that processes the command data. In some embodiments, a graphics processor command parser examines the client field of each command to condition the further processing of the command and route the command data to the appropriate client unit. In some embodiments, the graphics processor client units include a memory interface unit, a render unit, a 2D unit, a 3D unit, and a media unit. Each client unit has a corresponding processing pipeline that processes the commands. Once the command is received by the client unit, the client unit reads the opcode 904 and, if present, sub-opcode 905 to determine the operation to perform. The client unit performs the command using information in data field 906. For some commands an explicit command size 908 is expected to specify the size of the command. In some embodiments, the command parser automatically determines the size of at least some of the commands based on the command opcode. In some embodiments commands are aligned via multiples of a double word. Other command formats can be used.
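By way of illustration, a plain-struct C++ reading of the command fields named above (client 902, opcode 904, sub-opcode 905, data 906, and optional command size 908). The bit widths and layout below are assumptions made only to show the parsing pattern; they are not the actual command format.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Assumed layout of a command header double word (purely illustrative).
    struct CommandHeader {
        uint8_t  client;      // which client unit handles the command
        uint8_t  opcode;      // operation to perform
        uint8_t  sub_opcode;  // optional refinement of the opcode
        uint8_t  size_dwords; // explicit size, if the opcode requires one
    };

    static CommandHeader parse_header(uint32_t dword) {
        return CommandHeader{ static_cast<uint8_t>(dword >> 24),
                              static_cast<uint8_t>(dword >> 16),
                              static_cast<uint8_t>(dword >> 8),
                              static_cast<uint8_t>(dword) };
    }

    int main() {
        std::vector<uint32_t> stream{0x03040102u, 0xAAAAAAAAu};  // header + one data dword
        CommandHeader h = parse_header(stream[0]);
        std::printf("client=%d opcode=%d sub=%d size=%d dwords\n",
                    h.client, h.opcode, h.sub_opcode, h.size_dwords);
        return 0;
    }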
The flow diagram in
In some embodiments, the graphics processor command sequence 910 may begin with a pipeline flush command 912 to cause any active graphics pipeline to complete the currently pending commands for the pipeline. In some embodiments, the 3D pipeline 922 and the media pipeline 924 do not operate concurrently. The pipeline flush is performed to cause the active graphics pipeline to complete any pending commands. In response to a pipeline flush, the command parser for the graphics processor will pause command processing until the active drawing engines complete pending operations and the relevant read caches are invalidated. Optionally, any data in the render cache that is marked ‘dirty’ can be flushed to memory. In some embodiments, pipeline flush command 912 can be used for pipeline synchronization or before placing the graphics processor into a low power state.
In some embodiments, a pipeline select command 913 is used when a command sequence requires the graphics processor to explicitly switch between pipelines. In some embodiments, a pipeline select command 913 is required only once within an execution context before issuing pipeline commands unless the context is to issue commands for both pipelines. In some embodiments, a pipeline flush command 912 is required immediately before a pipeline switch via the pipeline select command 913.
In some embodiments, a pipeline control command 914 configures a graphics pipeline for operation and is used to program the 3D pipeline 922 and the media pipeline 924. In some embodiments, pipeline control command 914 configures the pipeline state for the active pipeline. In one embodiment, the pipeline control command 914 is used for pipeline synchronization and to clear data from one or more cache memories within the active pipeline before processing a batch of commands.
In some embodiments, commands related to the return buffer state 916 are used to configure a set of return buffers for the respective pipelines to write data. Some pipeline operations require the allocation, selection, or configuration of one or more return buffers into which the operations write intermediate data during processing. In some embodiments, the graphics processor also uses one or more return buffers to store output data and to perform cross thread communication. In some embodiments, the return buffer state 916 includes selecting the size and number of return buffers to use for a set of pipeline operations.
The remaining commands in the command sequence differ based on the active pipeline for operations. Based on a pipeline determination 920, the command sequence is tailored to the 3D pipeline 922 beginning with the 3D pipeline state 930 or the media pipeline 924 beginning at the media pipeline state 940.
The commands to configure the 3D pipeline state 930 include 3D state setting commands for vertex buffer state, vertex element state, constant color state, depth buffer state, and other state variables that are to be configured before 3D primitive commands are processed. The values of these commands are determined at least in part based on the particular 3D API in use. In some embodiments, 3D pipeline state 930 commands are also able to selectively disable or bypass certain pipeline elements if those elements will not be used.
In some embodiments, 3D primitive 932 command is used to submit 3D primitives to be processed by the 3D pipeline. Commands and associated parameters that are passed to the graphics processor via the 3D primitive 932 command are forwarded to the vertex fetch function in the graphics pipeline. The vertex fetch function uses the 3D primitive 932 command data to generate vertex data structures. The vertex data structures are stored in one or more return buffers. In some embodiments, 3D primitive 932 command is used to perform vertex operations on 3D primitives via vertex shaders. To process vertex shaders, 3D pipeline 922 dispatches shader programs to the graphics cores.
In some embodiments, 3D pipeline 922 is triggered via an execute 934 command or event. In some embodiments, a register write triggers command execution. In some embodiments execution is triggered via a ‘go’ or ‘kick’ command in the command sequence. In one embodiment, command execution is triggered using a pipeline synchronization command to flush the command sequence through the graphics pipeline. The 3D pipeline will perform geometry processing for the 3D primitives. Once operations are complete, the resulting geometric objects are rasterized and the pixel engine colors the resulting pixels. Additional commands to control pixel shading and pixel back-end operations may also be included for those operations.
In some embodiments, the graphics processor command sequence 910 follows the media pipeline 924 path when performing media operations. In general, the specific use and manner of programming for the media pipeline 924 depends on the media or compute operations to be performed. Specific media decode operations may be offloaded to the media pipeline during media decode. In some embodiments, the media pipeline can also be bypassed and media decode can be performed in whole or in part using resources provided by one or more general-purpose processing cores. In one embodiment, the media pipeline also includes elements for general-purpose graphics processor unit (GPGPU) operations, where the graphics processor is used to perform SIMD vector operations using computational shader programs that are not explicitly related to the rendering of graphics primitives.
In some embodiments, media pipeline 924 is configured in a similar manner as the 3D pipeline 922. A set of commands to configure the media pipeline state 940 are dispatched or placed into a command queue before the media object commands 942. In some embodiments, commands for the media pipeline state 940 include data to configure the media pipeline elements that will be used to process the media objects. This includes data to configure the video decode and video encode logic within the media pipeline, such as encode or decode format. In some embodiments, commands for the media pipeline state 940 also support the use of one or more pointers to “indirect” state elements that contain a batch of state settings.
In some embodiments, media object commands 942 supply pointers to media objects for processing by the media pipeline. The media objects include memory buffers containing video data to be processed. In some embodiments, all media pipeline states must be valid before issuing a media object command 942. Once the pipeline state is configured and media object commands 942 are queued, the media pipeline 924 is triggered via an execute command 944 or an equivalent execute event (e.g., register write). Output from media pipeline 924 may then be post processed by operations provided by the 3D pipeline 922 or the media pipeline 924. In some embodiments, GPGPU operations are configured and executed in a similar manner as media operations.
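A hedged pseudo-driver sketch, in C++, of the ordering described across this command sequence (flush, select, control, return-buffer state, pipeline-specific state, objects, execute). The emit helper and token names are invented stand-ins used only to make the ordering concrete; they are not an actual driver interface.

    #include <cstdio>

    // Hypothetical helper that appends a command token to a ring buffer.
    static void emit(const char* token) { std::printf("emit %s\n", token); }

    static void submit_3d_then_media() {
        emit("PIPELINE_FLUSH");        // 912: drain the currently active pipeline
        emit("PIPELINE_SELECT 3D");    // 913: switch to the 3D pipeline
        emit("PIPELINE_CONTROL");      // 914: synchronize and clear pipeline caches
        emit("RETURN_BUFFER_STATE");   // 916: size/number of return buffers
        emit("3D_STATE");              // 930: vertex buffer, depth buffer, etc.
        emit("3D_PRIMITIVE");          // 932: submit primitives to the vertex fetcher
        emit("EXECUTE");               // 934: kick geometry processing

        emit("PIPELINE_FLUSH");        // flush again before switching pipelines
        emit("PIPELINE_SELECT MEDIA"); // switch to the media pipeline
        emit("MEDIA_STATE");           // 940: decode/encode configuration
        emit("MEDIA_OBJECT");          // 942: pointers to media buffers
        emit("EXECUTE");               // 944: trigger media processing
    }

    int main() { submit_3d_then_media(); return 0; }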
In some embodiments, 3D graphics application 1010 contains one or more shader programs including shader instructions 1012. The shader language instructions may be in a high-level shader language, such as the High-Level Shader Language (HLSL) of Direct3D, the OpenGL Shader Language (GLSL), and so forth. The application also includes executable instructions 1014 in a machine language suitable for execution by the general-purpose processor core 1034. The application also includes graphics objects 1016 defined by vertex data.
In some embodiments, operating system 1020 is a Microsoft® Windows® operating system from the Microsoft Corporation, a proprietary UNIX-like operating system, or an open source UNIX-like operating system using a variant of the Linux kernel. The operating system 1020 can support a graphics API 1022 such as the Direct3D API, the OpenGL API, or the Vulkan API. When the Direct3D API is in use, the operating system 1020 uses a front-end shader compiler 1024 to compile any shader instructions 1012 in HLSL into a lower-level shader language. The compilation may be a just-in-time (JIT) compilation or the application can perform shader pre-compilation. In some embodiments, high-level shaders are compiled into low-level shaders during the compilation of the 3D graphics application 1010. In some embodiments, the shader instructions 1012 are provided in an intermediate form, such as a version of the Standard Portable Intermediate Representation (SPIR) used by the Vulkan API.
In some embodiments, user mode graphics driver 1026 contains a back-end shader compiler 1027 to convert the shader instructions 1012 into a hardware specific representation. When the OpenGL API is in use, shader instructions 1012 in the GLSL high-level language are passed to a user mode graphics driver 1026 for compilation. In some embodiments, user mode graphics driver 1026 uses operating system kernel mode functions 1028 to communicate with a kernel mode graphics driver 1029. In some embodiments, kernel mode graphics driver 1029 communicates with graphics processor 1032 to dispatch commands and instructions.
One or more aspects of at least one embodiment may be implemented by representative code stored on a machine-readable medium which represents and/or defines logic within an integrated circuit such as a processor. For example, the machine-readable medium may include instructions which represent various logic within the processor. When read by a machine, the instructions may cause the machine to fabricate the logic to perform the techniques described herein. Such representations, known as “IP cores,” are reusable units of logic for an integrated circuit that may be stored on a tangible, machine-readable medium as a hardware model that describes the structure of the integrated circuit. The hardware model may be supplied to various customers or manufacturing facilities, which load the hardware model on fabrication machines that manufacture the integrated circuit. The integrated circuit may be fabricated such that the circuit performs operations described in association with any of the embodiments described herein.
The RTL design 1115 or equivalent may be further synthesized by the design facility into a hardware model 1120, which may be in a hardware description language (HDL), or some other representation of physical design data. The HDL may be further simulated or tested to verify the IP core design. The IP core design can be stored for delivery to a 3rd party fabrication facility 1165 using non-volatile memory 1140 (e.g., hard disk, flash memory, or any non-volatile storage medium). Alternatively, the IP core design may be transmitted (e.g., via the Internet) over a wired connection 1150 or wireless connection 1160. The fabrication facility 1165 may then fabricate an integrated circuit that is based at least in part on the IP core design. The fabricated integrated circuit can be configured to perform operations in accordance with at least one embodiment described herein.
In some embodiments, the units of logic 1172, 1174 are electrically coupled with a bridge 1182 that is configured to route electrical signals between the logic 1172, 1174. The bridge 1182 may be a dense interconnect structure that provides a route for electrical signals. The bridge 1182 may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic 1172, 1174.
Although two units of logic 1172, 1174 and a bridge 1182 are illustrated, embodiments described herein may include more or fewer logic units on one or more dies. The one or more dies may be connected by zero or more bridges, as the bridge 1182 may be excluded when the logic is included on a single die. Alternatively, multiple dies or units of logic can be connected by one or more bridges. Additionally, multiple logic units, dies, and bridges can be connected together in other possible configurations, including three-dimensional configurations.
In various embodiments a package assembly 1190 can include components and chiplets that are interconnected by a fabric 1185 and/or one or more bridges 1187. The chiplets within the package assembly 1190 may have a 2.5D arrangement using Chip-on-Wafer-on-Substrate stacking in which multiple dies are stacked side-by-side on a silicon interposer 1189 that couples the chiplets with the substrate 1180. The substrate 1180 includes electrical connections to the package interconnect 1183. In one embodiment the silicon interposer 1189 is a passive interposer that includes through-silicon vias (TSVs) to electrically couple chiplets within the package assembly 1190 to the substrate 1180. In one embodiment, silicon interposer 1189 is an active interposer that includes embedded logic in addition to TSVs. In such an embodiment, the chiplets within the package assembly 1190 are arranged using 3D face-to-face die stacking on top of the active interposer 1189. The active interposer 1189 can include hardware logic for I/O 1191, cache memory 1192, and other hardware logic 1193, in addition to interconnect fabric 1185 and a silicon bridge 1187. The fabric 1185 enables communication between the various logic chiplets 1172, 1174 and the logic 1191, 1193 within the active interposer 1189. The fabric 1185 may be an NoC interconnect or another form of packet switched fabric that switches data packets between components of the package assembly. For complex assemblies, the fabric 1185 may be a dedicated chiplet that enables communication between the various hardware logic of the package assembly 1190.
Bridge structures 1187 within the active interposer 1189 may be used to facilitate a point-to-point interconnect between, for example, logic or I/O chiplets 1174 and memory chiplets 1175. In some implementations, bridge structures 1187 may also be embedded within the substrate 1180. The hardware logic chiplets can include special purpose hardware logic chiplets 1172, logic or I/O chiplets 1174, and/or memory chiplets 1175. The hardware logic chiplets 1172 and logic or I/O chiplets 1174 may be implemented at least partly in configurable logic or fixed-functionality logic hardware and can include one or more portions of any of the processor core(s), graphics processor(s), parallel processors, or other accelerator devices described herein. The memory chiplets 1175 can be DRAM (e.g., GDDR, HBM) memory or cache (SRAM) memory. Cache memory 1192 within the active interposer 1189 (or substrate 1180) can act as a global cache for the package assembly 1190, part of a distributed global cache, or as a dedicated cache for the fabric 1185.
Each chiplet can be fabricated as separate semiconductor die and coupled with a base die that is embedded within or coupled with the substrate 1180. The coupling with the substrate 1180 can be performed via an interconnect structure 1173. The interconnect structure 1173 may be configured to route electrical signals between the various chiplets and logic within the substrate 1180. The interconnect structure 1173 can include interconnects such as, but not limited to bumps or pillars. In some embodiments, the interconnect structure 1173 may be configured to route electrical signals such as, for example, input/output (I/O) signals and/or power or ground signals associated with the operation of the logic, I/O, and memory chiplets. In one embodiment, an additional interconnect structure couples the active interposer 1189 with the substrate 1180.
In some embodiments, the substrate 1180 is an epoxy-based laminate substrate. The substrate 1180 may include other suitable types of substrates in other embodiments. The package assembly 1190 can be connected to other electrical devices via a package interconnect 1183. The package interconnect 1183 may be coupled to a surface of the substrate 1180 to route electrical signals to other electrical devices, such as a motherboard, other chipset, or multi-chip module.
In some embodiments, a logic or I/O chiplet 1174 and a memory chiplet 1175 can be electrically coupled via a bridge 1187 that is configured to route electrical signals between the logic or I/O chiplet 1174 and a memory chiplet 1175. The bridge 1187 may be a dense interconnect structure that provides a route for electrical signals. The bridge 1187 may include a bridge substrate composed of glass or a suitable semiconductor material. Electrical routing features can be formed on the bridge substrate to provide a chip-to-chip connection between the logic or I/O chiplet 1174 and a memory chiplet 1175. The bridge 1187 may also be referred to as a silicon bridge or an interconnect bridge. For example, the bridge 1187, in some embodiments, is an Embedded Multi-die Interconnect Bridge (EMIB). In some embodiments, the bridge 1187 may simply be a direct connection from one chiplet to another chiplet.
In one embodiment, SRAM and power delivery circuits can be fabricated into one or more of the base chiplets 1196, 1198, which can be fabricated using a different process technology relative to the interchangeable chiplets 1195 that are stacked on top of the base chiplets. For example, the base chiplets 1196, 1198 can be fabricated using a larger process technology, while the interchangeable chiplets can be manufactured using a smaller process technology. One or more of the interchangeable chiplets 1195 may be memory (e.g., DRAM) chiplets. Different memory densities can be selected for the package assembly 1194 based on the power and/or performance targeted for the product that uses the package assembly 1194. Additionally, logic chiplets with a different number or type of functional units can be selected at time of assembly based on the power and/or performance targeted for the product. Additionally, chiplets containing IP logic cores of differing types can be inserted into the interchangeable chiplet slots, enabling hybrid processor designs that can mix and match different technology IP blocks.
As shown in
Graphics processor 1310 additionally includes one or more memory management units (MMUs) 1320A-1320B, cache(s) 1325A-1325B, and circuit interconnect(s) 1330A-1330B. The one or more MMU(s) 1320A-1320B provide for virtual to physical address mapping for the graphics processor 1310, including for the vertex processor 1305 and/or fragment processor(s) 1315A-1315N, which may reference vertex or image/texture data stored in memory, in addition to vertex or image/texture data stored in the one or more cache(s) 1325A-1325B. In one embodiment the one or more MMU(s) 1320A-1320B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processor(s) 1205, image processor 1215, and/or video processor 1220 of
As shown
The SIMT processor includes an instruction unit 1402. The instruction unit is sometimes also called a front-end unit. The instruction unit or front-end unit may operate as a control plane for the SIMT processor and may be operative to receive and process instructions and use them to control a single-instruction, multiple-thread (SIMT) processor 1403. By way of example, the instruction unit may determine a next instruction to perform, fetch the instruction, decode the instruction, schedule the instruction (e.g., track the number of cycles per instruction and control the wavefront accessed), generate the thread read and write addresses, and so on.
The SIMT processor 1403 represents a data processing portion of the processor 1401. The SIMT processor is able to perform instructions in SIMT fashion. The SIMT processor includes multiple processing elements 1404. The processing elements may represent hardware elements, hardware units, or circuitry. Examples of suitable processing elements include, but are not limited to, arithmetic and logical units (ALUs), floating-point ALUs, floating-point units, integer units, tensor units, ray tracing cores, texture units, and the like, and various combinations thereof. Certain GPUs available from Nvidia Corporation of Santa Clara, California, United States refer to the SIMT processor as a streaming multiprocessor (SM) and refer to the processing elements as either streaming processors (SPs) or cores (e.g., Compute Unified Device Architecture (CUDA) cores). Certain GPUs available from Advanced Micro Devices (AMD), Inc. of Santa Clara, California, United States refer to the SIMT processor as a compute unit. Commonly, there may be many processing elements (e.g., the processor 1401 may have from hundreds to many thousands of processing elements), although the scope of the invention is not limited to any known number and is not limited to any known way of apportioning the processing elements among potentially multiple SIMT processors.
For the SIMT processor, work groups may be broken down into hardware schedulable groups of threads for the processing elements 1404 (e.g., stream processors (SP), CUDA cores, etc.). These hardware schedulable groups may also be called wavefronts, warps, or parallel thread groups (e.g., groups of threads to run or execute in parallel). By way of example, a wavefront or warp may include 8, 16, 32, or some other number of SPs each to perform a corresponding thread. The number of such threads or SPs represents the width of the wavefront or warp. Conventionally, these threads or SPs in the wavefront or warp may perform the same instruction concurrently (e.g., during the same clock cycle). There may also be one or more than one wavefront. For example, there may be 8, 16, 32, 64, or some other number of wavefronts. Conventionally, these wavefronts may perform instructions sequentially (e.g., on sequential clock cycles).
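A small worked C++ sketch of the decomposition described above, assuming a work group of some number of logical threads broken into wavefronts of a given width; the ceiling division is the only arithmetic involved.

    #include <cstdio>

    // Number of wavefronts needed to cover a work group of `threads` logical
    // threads when each wavefront executes `width` threads in lockstep.
    static unsigned wavefronts_needed(unsigned threads, unsigned width) {
        return (threads + width - 1) / width;   // ceiling division
    }

    int main() {
        std::printf("100 threads, width 32 -> %u wavefronts\n", wavefronts_needed(100, 32)); // 4
        std::printf("64 threads,  width 16 -> %u wavefronts\n", wavefronts_needed(64, 16));  // 4
        return 0;
    }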
In some embodiments, the processor 1401 may optionally be able to perform operations corresponding to an embodiment of a variable wavefront SIMT instruction 1407 (e.g., as an embodiment of a variable thread SIMT instruction). In some embodiments, the variable wavefront SIMT instruction may have a variable, flexible, or specifiable value to indicate a number of one or more threads corresponding to the width of a wavefront. In some embodiments, the variable wavefront SIMT instruction may have a variable, flexible, or specifiable value to indicate a number of wavefronts. The variable wavefront SIMT instruction may be operative to control the processor 1401 and/or the SIMT processor 1403 to perform operations based on and/or according to and/or consistent with either one or both of these values. For example, it may cause the processor or SIMT processor to use either one or both of a variable, flexible, or specifiable wavefront width and/or number of wavefronts. Examples of suitable variable wavefront SIMT instructions are discussed further below.
In some embodiments, the variable wavefront SIMT instruction or other variable thread SIMT instruction may be able to specify a number of threads that is less than the number of threads configured and/or initialized. For a hard GPU, the number of threads configured is used herein to refer to the number of threads built into the hardware of the hard GPU (e.g., fixed during manufacture). In contrast, the number of threads initialized is used herein to refer to the number of threads initialized (e.g., as specified by a CUDA initialization instruction, an accelerator offload instruction, an instruction used to issue a program to a GPU, or a call from a host to the GPU to run code, etc.) to be used to run a program, routine, or set of instructions. Such an initialization instruction does not initialize threads on an instruction-by-instruction basis and the initialization instruction is not a data processing instruction (e.g., an arithmetic and/or logical instruction that performs operations on data and generates a result). Conventionally, all the threads configured and/or initialized on a hard GPU would run and execute. In contrast, in some embodiments, the variable thread SIMT instructions disclosed herein allow only a subset of the threads configured and/or only a subset of the threads initialized to run or execute. Also, in some embodiments, the variable thread SIMT instructions disclosed herein may allow these subsets to be dynamically varied on an instruction-by-instruction basis. Also, in some embodiments, the variable thread SIMT instructions disclosed herein that allow these subsets to be specified or dynamically varied may also be arithmetic and/or logical and/or other data processing instructions that perform arithmetic and/or logical and/or other data processing operations on data and generate results (e.g., that are stored in registers).
In some embodiments, as will be discussed further below, the processor 1401 may optionally be able to perform operations corresponding to an embodiment of an inter-wavefront register access SIMT instruction 1408. The inter-wavefront register access SIMT instruction may have at least one field to provide a source thread identifier and at least one field to provide a source register identifier. The inter-wavefront register access SIMT instruction may be operative to control the processor 1401 and/or the SIMT processor 1403 to perform operations based on and/or according to and/or consistent with the source thread identifier and the source register identifier. For example, it may cause a processing element of the SIMT processor that executes the inter-wavefront register access SIMT instruction for a first thread to receive data from a register, identified by the source register identifier, of a second, different thread that is identified by the source thread identifier. Examples of suitable inter-wavefront register access SIMT instructions are discussed further below.
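A behavioral C++ sketch of the semantics just described, modeled with an ordinary array of per-thread register files: the executing thread names a source thread and a source register, and receives that thread's register value. This is a functional model only, assuming small illustrative thread and register counts; it says nothing about the hardware data paths involved.

    #include <array>
    #include <cstdint>
    #include <cstdio>

    constexpr int kThreads = 8;   // assumed thread count for the model
    constexpr int kRegs    = 16;  // assumed registers per thread

    // regfile[t][r] models register r of thread t.
    static std::array<std::array<uint32_t, kRegs>, kThreads> regfile{};

    // Functional model of an inter-wavefront register access: thread `dst_thread`
    // reads register `src_reg` of thread `src_thread` and stores it in `dst_reg`.
    static void inter_thread_read(int dst_thread, int dst_reg, int src_thread, int src_reg) {
        regfile[dst_thread][dst_reg] = regfile[src_thread][src_reg];
    }

    int main() {
        regfile[5][3] = 0xCAFEu;        // thread 5 holds a value in its register 3
        inter_thread_read(0, 7, 5, 3);  // thread 0 pulls it into its register 7
        std::printf("thread 0, r7 = 0x%X\n", regfile[0][7]);
        return 0;
    }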
In some embodiments, as will be discussed further below, the processor 1401 may optionally be able to perform operations corresponding to an embodiment of one or more low overhead loop instructions 1409. Examples of suitable low overhead loop instruction(s) are discussed further below.
In some embodiments, as will be discussed further below, the processor 1401 may optionally be able to perform operations corresponding to an embodiment of a dot product SIMT instruction 1410. Examples of suitable dot product SIMT instructions are discussed further below. In some embodiments, the processor 1401 may optionally include a dot product unit 1406 to perform the dot product SIMT instruction. In some embodiments, the dot product unit may be accessible to and/or capable of being used by the processing elements 1404.
In some embodiments, the processor may optionally include a shared memory 1405. The shared memory may be loaded by a host or external agent. The shared memory may be written to directly by instructions. In the illustrated embodiment, the shared memory is shown as being part of the SIMT processor. In other embodiments, the shared memory may be separate from the SIMT processor but coupled with it.
The processor 1501 may receive the variable wavefront SIMT instruction 1507, such as, for example, from a cache (e.g., a system cache, a shared cache, or a level two (L2) cache, etc.) or memory 1599. In some embodiments, the variable wavefront SIMT instruction may be a low-level instruction or control signal (e.g., binary microcode, a machine-level instruction, a binary instruction, etc.) that the processor is natively able to execute. In other embodiments, the processor may have circuitry or other logic (e.g., instruction translation or conversion logic) to translate or convert the variable wavefront SIMT instruction into one or more other instructions that the processor is natively able to execute.
The variable wavefront SIMT instruction has an opcode or operation code 1512 (e.g., values in one or more fields). The opcode at least partially specifies the operation(s) that the variable wavefront SIMT instruction is to cause the processor 1501 to perform. Aside from the variable wavefront aspect, the variable wavefront SIMT instruction may be agnostic to the type of operation to be performed. Examples of suitable types of operations that may be performed include, but are not limited to, multiplication, addition, multiplication-addition, matrix multiplication, floating-point format conversion, absolute value, negation, as well as various other types of arithmetic and/or logical operations.
The variable wavefront SIMT instruction may also have one or more fields (not shown) to specify one or more source registers or other storage locations from where data to be operated on is to be received and one or more destination registers or other storage locations where one or more results are to be stored. For example, the variable wavefront SIMT instruction may have a first source register identifier field to identify a first source register as a source of a first source data, a second source register identifier field to identify a second source register as a source of a second source data, and a destination register identifier field to identify a destination register where a result data is to be stored. Suitable widths for the registers and data include, but are not limited to, 8-bits, 16-bits, 32-bits, and 64-bits. The variable wavefront SIMT instruction may optionally have attributes of the other instruction formats discussed elsewhere herein (e.g., for
In some embodiments, the variable wavefront SIMT instruction may have at least one field to provide a value indicative of a number of one or more threads 1513. The number of the one or more threads may correspond to the width of a wavefront to be used for the SIMT instruction. The value indicative of a number of one or more threads may represent a width of wavefront(s) specifier. For the processor 1501, work groups may be broken down into hardware schedulable groups of threads for processing elements PE1 through PEn (e.g., stream processors (SP), CUDA cores, etc.) to perform. These hardware schedulable groups may also be referred to as wavefronts or warps. By way of example, a wavefront or warp may include 8, 16, 32, or some other number of threads or SPs each to perform a corresponding one of the threads. The number of such threads or SPs is referred to herein as the width of the wavefront or warp. Conventionally, all the threads or SPs in the full width of the wavefront or warp may perform the same instruction concurrently (e.g., during the same clock cycle). The value indicative of a number of one or more threads 1513 may specify or indicate the width of one or more wavefronts, warps, or schedulable groups of threads (e.g., a number of threads and/or a number of SPs or other processing elements) to be used for the variable wavefront SIMT instruction. The variable wavefront SIMT instruction may allow the width to be indicated to be less than the full or maximum possible width of the wavefront or warp (e.g., only a fraction of the full or maximum possible width). Conventionally, SIMT instructions do not have the value indicative of a number of one or more threads 1513, but rather use the full/maximum possible width as a fixed or static width for the wavefront or warp.
The number of one or more threads or wavefront width may be specified in different ways in different embodiments. As one example, a single bit may be able to have two different values to specify either one of two different numbers of threads or wavefront widths (e.g., the full/maximum possible width of the wavefront or only half the full/maximum possible width of the wavefront). As another example, two bits may be able to have four different values to specify any one of four different numbers of threads or wavefront widths (e.g., the full/maximum possible width of the wavefront, only half the full/maximum possible width of the wavefront, only one quarter the full/maximum possible width of the wavefront, or only a width of one thread or one processing element). Alternatively, different fractions of the full/maximum possible width may be used, such as, for example, one third, one eighth, and so on. In other examples, three or more bits may optionally be used to specify even more different numbers of threads or wavefront widths. If desired, enough bits may optionally be included to specify any integer up to the full/maximum possible width of the wavefront.
In some embodiments, the variable wavefront SIMT instruction may optionally have at least one field to provide a value to indicate a number of wavefronts, warps, or schedulable groups of threads 1514 to be used for the SIMT instruction. In the illustrated variable wavefront SIMT instruction the value to indicate a number of threads 1513 is included and the inclusion of the value to indicate a number of wavefronts 1514 is optional. In an alternate embodiment, a variable wavefront SIMT instruction may include a value to indicate a number of wavefronts 1514 and optionally include or optionally not include a value to indicate a number of threads 1513. The number of wavefronts may be specified or otherwise indicated in different ways in different embodiments. As one example, a single bit may be able to have two different values to specify either one of two different numbers of wavefronts (e.g., the maximum number of wavefronts available or only half the maximum number of wavefronts available). As another example, two bits may be able to have four different values to specify any one of four different numbers of wavefronts (e.g., the maximum number of wavefronts available, only half the maximum number of wavefronts available, only one quarter the maximum number of wavefronts available, or only one single wavefront). Alternatively, different fractions of the available wavefronts may be used, such as, for example, one third, one eighth, and so on. In other examples, three or more bits may optionally be used to specify even more different numbers of wavefronts. If desired, enough bits may optionally be included to specify any integer number from one up to the maximum number of wavefronts available. Conventionally, SIMT instructions do not have the value to indicate the number of wavefronts 1514, but rather use the full/maximum possible number of wavefronts/warps, or a lesser number if less than all threads have been initialized when starting a program, as a fixed or static number. The value to indicate the number of wavefronts may also be regarded as a value to indicate a subset or all of a number of clock cycles that an instruction has been configured and/or initialized to run for.
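A C++ sketch of one possible decoding of the two-bit encodings described above for the thread-count value 1513 and wavefront-count value 1514; the mapping (full, half, quarter, one) follows the examples in the text, but the field positions and maximums are assumptions made for illustration.

    #include <cstdint>
    #include <cstdio>

    // Example decode of a 2-bit selector into a fraction of a maximum,
    // following the full / half / quarter / one example given in the text.
    static unsigned decode_count(unsigned two_bits, unsigned maximum) {
        switch (two_bits & 0x3u) {
            case 0: return maximum;        // full/maximum width or count
            case 1: return maximum / 2;    // half
            case 2: return maximum / 4;    // quarter
            default: return 1;             // a single thread or a single wavefront
        }
    }

    int main() {
        const unsigned max_width = 32, max_wavefronts = 16;
        uint32_t field_1513 = 1, field_1514 = 3;   // hypothetical encoded values
        std::printf("wavefront width: %u of %u\n", decode_count(field_1513, max_width), max_width);
        std::printf("wavefronts:      %u of %u\n", decode_count(field_1514, max_wavefronts), max_wavefronts);
        return 0;
    }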
Referring again to
In some embodiments, the decode unit and/or the instruction unit 1502 may decode or otherwise interpret the value to indicate the number of threads 1513 and the optional value to indicate the number of wavefront(s) 1514 (e.g., when it is optionally included). In some embodiments, the instruction scheduler unit and/or the instruction unit 1502 may schedule the variable wavefront SIMT instruction on one or more threads and/or processing elements (e.g., SPs) and/or in some cases on only part of the full or maximum possible width of a wavefront/warp according to and/or based on and/or consistent with the value to indicate the number of threads 1513. Likewise, in some embodiments, the instruction scheduler unit and/or the instruction unit 1502 may schedule the variable wavefront SIMT instruction on one or more wavefronts or warps and/or in some cases on less than the maximum number of wavefronts available according to and/or based on and/or consistent with the optional value to indicate the number of wavefront(s) 1514. Similarly, the instruction dispatch unit and/or the instruction unit 1502 may dispatch the variable wavefront SIMT instruction according to and/or based on and/or consistent with the value to indicate the number of threads 1513 and the optional value to indicate the number of wavefront(s) 1514.
The SIMT processor 1503 (e.g., a streaming multiprocessor, a compute unit, etc.) includes an array of processing elements (PEs). In the illustration, the SIMT processor includes n PEs, where a first PE (PE1), a second PE (PE2), an x-th PE (PEx), and an nth PE (PEn) are shown. The scope of the invention is not limited to any particular number of PEs. A corresponding set of n threads may run on the n PEs. For example, a first thread (T1) may run on PE1, a second thread (T2) may run on PE2, an x-th thread (Tx) may run on PEx, and an nth thread (Tn) may run on PEn. The n threads and/or n PEs may represent the full width and/or maximum possible width of a first wavefront, warp, or schedulable group of threads 1515-1. Likewise, a second set of n threads, Tn+1 to T2n, may represent the full width and/or maximum possible width of a second wavefront, warp, or schedulable group of threads 1515-2. Similarly, an m-th set of n threads, Tm*n+1 to T(m+1)*n, may represent the full width and/or maximum possible width of an m-th wavefront, warp, or schedulable group of threads 1515-m. The m wavefronts, warps, or schedulable groups of threads may represent the maximum number of wavefronts, warps, or schedulable groups of threads available.
In some embodiments, the processor 1501 may select or determine a width of the first wavefront 1515-1 (e.g., a number of the threads T1-Tn or processing elements PE1-PEn) to be used for the variable wavefront SIMT instruction according to and/or based on and/or consistent with the value to indicate the number of threads 1513. For example, this may include selecting only T1, only T1-Tx, all of T1-Tn, or only the first fraction (e.g., quarter, third, half, etc.) of T1-Tn. In some embodiments, the processor 1501 may select or determine a number of wavefronts to be used for the variable wavefront SIMT instruction according to and/or based on and/or consistent with the optional value to indicate the number of wavefront(s) 1514. For example, this may include selecting only the first wavefront 1515-1, only the first wavefront 1515-1 and the second wavefront 1515-2, all of the first wavefront 1515-1 through the m-th wavefront 1515-m (e.g., the maximum number of wavefronts), or only the first fraction (e.g., quarter, third, half, etc.) of the m wavefronts. Each of a number of processing elements equal in number to the number of the one or more threads indicated by the value 1513 may execute the SIMT instruction concurrently for a different corresponding one of the number of the one or more threads. In some cases, the number of such processing elements may be only a fraction or subset of all processing elements of the SIMT processor that are able to execute the SIMT instruction concurrently (e.g., those corresponding to the maximum possible wavefront width). In some embodiments, each of this number of processing elements may execute the SIMT instruction concurrently for a different corresponding one of the number of the one or more threads at one or more different times for each of the number of the one or more wavefronts indicated by the value 1514. As one example, only processing elements PE1 and PE2 may execute the variable wavefront SIMT instruction at a first time for threads T1 and T2, and then subsequently at a different time (e.g., in a subsequent clock cycle) only processing elements PE1 and PE2 may execute the variable wavefront SIMT instruction for threads Tn+1 and Tn+2.
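The following is a minimal behavioral sketch, in Python, of the scheduling behavior just described: only the first num_threads processing elements execute, once per selected wavefront, so a request for two threads and two wavefronts uses only PE1 and PE2, first for threads T1 and T2 and then for threads Tn+1 and Tn+2. The one-wavefront-per-cycle pacing and the thread numbering are assumptions for illustration.

```python
# Sketch only: enumerate which PE runs which thread in which cycle, given the two
# indicated values (number of threads and number of wavefronts) and an assumed width n.
def schedule(num_threads: int, num_wavefronts: int, n: int):
    """Return (cycle, pe_index, thread_index) tuples, 1-based as in the description."""
    plan = []
    for wf in range(num_wavefronts):          # assume one wavefront per cycle
        for pe in range(1, num_threads + 1):  # only the first num_threads PEs are used
            thread = wf * n + pe              # T1..Tn in wavefront 1, Tn+1.. in wavefront 2
            plan.append((wf + 1, pe, thread))
    return plan

if __name__ == "__main__":
    for cycle, pe, t in schedule(num_threads=2, num_wavefronts=2, n=32):
        print(f"cycle {cycle}: PE{pe} executes thread T{t}")
    # cycle 1: PE1/PE2 run T1/T2; cycle 2: PE1/PE2 run T33/T34 (i.e., Tn+1/Tn+2 for n=32)
```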
Advantageously, the ability to use less than the full/maximum width of a wavefront and/or the ability to use less than all of the available wavefronts may be beneficial when limited portions of data are to be processed (e.g., less data needs to be processed for one part of a program or one instruction of a program than for another) by reducing power consumption and/or reducing heat generation and/or freeing resources for other tasks. Also, different instances of the variable wavefront SIMT instruction may allow the width of the wavefront and/or the number of wavefronts to be changed dynamically during runtime on an instruction-by-instruction basis (e.g., a first variable wavefront SIMT instruction may use a first width, a next sequential variable wavefront SIMT instruction may use a second, different width, and so on). An alternate possible approach could be to diverge execution so that only some of the threads write back via a conditional execution statement (e.g., if the thread identifier is less than a certain thread identifier value, then write back for the thread, otherwise do not). For example, a typical GPU may run all the threads that are currently configured but only write back results for a subset of them as needed (e.g., using thread divergence and predication). However, all the threads still run or execute and therefore consume power, generate heat, and are not available for other tasks. This does not significantly improve performance. In contrast, in some of the embodiments disclosed herein only a subset of the threads currently configured may actually run or execute whereas another subset of the threads currently configured may not even run or execute. This may improve performance by not consuming as much power, not generating as much heat, being potentially available for other tasks, etc. This also can be done without, and does not necessarily require the need for, thread divergence and predication.
The processor includes an instruction unit 1602 and a SIMT processor 1603 (e.g., a streaming multiprocessor or a compute unit) having processing elements including a processing element 1604 (e.g., a streaming processor (SP)). Aside from characteristics pertaining to inter-wavefront register access (instead of, or in addition to, the variable wavefront characteristics already described), and unless otherwise specified, the instruction unit 1602, the SIMT processor 1603, and the processing element 1604 may optionally have characteristics the same as or similar to those previously described. To avoid obscuring the description, the different and/or additional characteristics will primarily be described for these components without repeating the characteristics that may optionally be the same as or similar to those already described.
The processing element (e.g., an SP) may have threads spread across multiple wavefronts. For example, if there are m wavefronts, and if each wavefront has a width of n threads, then the processing element may have a thread T1 in a first wavefront, a thread Tn+1 in a second wavefront, a thread T2n+1 in a third wavefront, and so on, up to a thread Tm*n+1 in an m-th wavefront. This is just one example. The number of wavefronts may also optionally be shortened if the variable number of wavefronts aspect is incorporated into this instruction, which to avoid obscuring the description of the inter-wavefront register access aspect is not done in this example but is possible. Each of these threads may have a corresponding set of registers. For example, each of threads T1, Tn+1, T2n+1, and Tm*n+1 may have a corresponding set of registers R1-Ry, where y may be, for example, 8, 16, 32, 64, or some other number. These registers may potentially be registers allocated from a pool rather than dedicated sets of registers. Suitable widths for the registers and data include, but are not limited to, 8-bits, 16-bits, 32-bits, and 64-bits.
The instruction unit 1602 of the processor 1601 may receive the inter-wavefront register access SIMT instruction 1608, such as, for example, from a cache (e.g., a system cache, a shared cache, or a level two (L2) cache, etc.) or memory 1699. In some embodiments, the inter-wavefront register access SIMT instruction may be a low-level instruction or control signal (e.g., binary microcode, a machine-level instruction, a binary instruction, etc.) that the processor is natively able to execute. In other embodiments, the processor may have circuitry or other logic (e.g., instruction translation or conversion logic) to translate or convert the inter-wavefront register access SIMT instruction into one or more other instructions that the processor is natively able to execute.
The inter-wavefront register access SIMT instruction may optionally have attributes of the other instruction formats discussed elsewhere herein (e.g.,
In some embodiments, the inter-wavefront register access SIMT instruction may have an optional inter-wavefront register access enable and/or disable control 1621 (e.g., a value in one or more bits or fields) to either enable or disable the ability to perform inter-wavefront register access. For example, the inter-wavefront register access enable and/or disable control may be a single bit that may have a first value (e.g., be set to a value of binary one) to enable inter-wavefront register access, or a second, different value (e.g., be cleared to a value of binary zero) to disable inter-wavefront register access. Alternatively, if desired, the opcode may always enable inter-wavefront register access.
The inter-wavefront register access SIMT instruction may also have a first source thread identifier 1622 (e.g., a value in one or more bits or fields) and a first source register identifier 1623 (e.g., a value in one or more bits or fields). In some embodiments, the first source thread identifier 1622 may be able to specify any of the threads in any of the wavefronts of the processing element 1604. The first source thread identifier 1622 and the first source register identifier 1623 may together identify a first source register to be used by the inter-wavefront register access SIMT instruction. By way of example, the first source thread identifier 1622 may identify thread T2n+1 and the first source register identifier 1623 may identify register R1 so that together they identify register R1 in thread T2n+1 as a location of a first source data or operand of the instruction. Conventionally, SIMT instructions do not have thread identifiers and typically are not able to access registers in different wavefronts of the same processing element.
In some embodiments, the inter-wavefront register access SIMT instruction may optionally also have a second source thread identifier 1624 (e.g., a value in one or more bits or fields) and a second source register identifier 1625 (e.g., a value in one or more bits or fields). Alternatively, some instructions may only have one source data or operand and may optionally omit the second source thread identifier and the second source register identifier. In some embodiments, the second source thread identifier 1624 may be able to specify any of the threads in any of the wavefronts of the processing element 1604. The second source thread identifier 1624 and the second source register identifier 1625 may together identify a second source register to be used by the inter-wavefront register access SIMT instruction. By way of example, the second source thread identifier 1624 may identify thread Tn+1 and the second source register identifier 1625 may identify register R2 so that together they identify register R2 in thread Tn+1 as a location of a second source data or operand of the instruction.
In some embodiments, the inter-wavefront register access SIMT instruction may optionally have a destination thread identifier 1626 (e.g., a value in one or more bits or fields) and a destination register identifier 1627 (e.g., a value in one or more bits or fields). In some embodiments, the destination thread identifier 1626 may be able to specify any of the threads in any of the wavefronts of the processing element 1604. The destination thread identifier 1626 and the destination register identifier 1627 may together identify a destination register to be used by the inter-wavefront register access SIMT instruction. By way of example, the destination thread identifier 1626 may identify thread Tm*n+1 and the destination register identifier 1627 may identify register R3 so that together they identify register R3 in thread Tm*n+1 as a location where a result data or operand of the instruction is to be stored. Alternatively, in other embodiments, the destination thread identifier 1626 may optionally be omitted and it may be implicit that the destination register identifier 1627 identifies a register of the currently active thread for the currently active wavefront for that processing element (e.g., if thread T1 is currently executing the inter-wavefront register access SIMT instruction then it may be implicit that the destination register identifier 1627 identifies one of the registers of thread T1). In one example embodiment, the enable/disable control 1621 and the identifiers 1622, 1624, 1626 may optionally be included in an immediate (e.g., a 16-bit immediate), although this is not required.
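As one purely hypothetical illustration of the optional 16-bit immediate mentioned above, the enable/disable control 1621 and the three thread identifiers 1622, 1624, and 1626 could be packed as one enable bit plus three 5-bit thread identifiers. The bit positions and field widths below are assumptions; the description does not define a specific layout.

```python
# Hypothetical 16-bit immediate layout: [15] enable, [14:10] src1 tid, [9:5] src2 tid,
# [4:0] dst tid. All positions/widths are assumed for illustration only.
def pack_immediate(enable: int, src1_tid: int, src2_tid: int, dst_tid: int) -> int:
    assert 0 <= enable <= 1 and all(0 <= t < 32 for t in (src1_tid, src2_tid, dst_tid))
    return (enable << 15) | (src1_tid << 10) | (src2_tid << 5) | dst_tid

def unpack_immediate(imm: int):
    return (imm >> 15) & 0x1, (imm >> 10) & 0x1F, (imm >> 5) & 0x1F, imm & 0x1F

if __name__ == "__main__":
    imm = pack_immediate(enable=1, src1_tid=3, src2_tid=1, dst_tid=7)
    print(hex(imm), unpack_immediate(imm))  # fields round-trip through the immediate
```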
The SIMT processor 1603 and/or the processing element 1604 may perform operations corresponding to the inter-wavefront register access SIMT instruction for a first thread. One such operation may be accessing a register or receiving data from a register (e.g., register R1 in thread T2n+1 in this example), indicated by the first source thread identifier 1622 and the first source register identifier 1623, to obtain a first source data or operand. In some cases, the register may optionally be for a different thread than the first thread performing the inter-wavefront register access SIMT instruction. Another operation may be accessing a register or receiving data from a register (e.g., register R2 in thread Tn+1 in this example), indicated by the second source thread identifier 1624 and the second source register identifier 1625, to obtain a second source data or operand. In some cases, the register may optionally be for a different thread than the first thread performing the inter-wavefront register access SIMT instruction. An operation (e.g., at least partly specified by the opcode 1620) may be performed on the first and second source data or operands to generate a result data or operand. By way of example, in the case of an add instruction the first source data may be added to the second source data to generate a sum as the result data. Another operation may be to store the result data in a register (e.g., register R3 in thread Tm*n+1 in this example), indicated by the optional destination thread identifier 1626 and the destination register identifier 1627. Alternatively, the optional destination thread identifier 1626 may optionally be omitted, and it may be implicit that the register indicated by the destination register identifier 1627 is in the current thread of the current wavefront.
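A minimal behavioral sketch of the add example above is shown below, modeling a processing element's register storage as a mapping keyed by (thread, register) so that the two sources and the destination may each belong to different threads (different wavefronts) of the same processing element. The wavefront width, register values, and destination thread are assumptions for illustration.

```python
# Sketch only: per-PE register storage keyed by (thread, register) pairs.
def execute_interwavefront_add(regfile, src1, src2, dst):
    """src1, src2, and dst are (thread, register) keys; computes dst = src1 + src2."""
    regfile[dst] = regfile[src1] + regfile[src2]
    return regfile[dst]

if __name__ == "__main__":
    n = 32  # assumed wavefront width
    regfile = {
        (f"T{2 * n + 1}", "R1"): 10.0,  # first source: R1 of thread T2n+1
        (f"T{n + 1}", "R2"): 2.5,       # second source: R2 of thread Tn+1
    }
    # Destination: R3 of thread T1 here for simplicity (the text example uses Tm*n+1).
    result = execute_interwavefront_add(
        regfile,
        src1=(f"T{2 * n + 1}", "R1"),
        src2=(f"T{n + 1}", "R2"),
        dst=("T1", "R3"),
    )
    print(result)  # 12.5
```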
At least conceptually, the variable wavefront SIMT instructions and the inter-wavefront register access SIMT instructions can be used to control the SIMT processor or GPU to have certain characteristics of, or even operate like or emulate the processing characteristics of, different types of non-SIMT processors. The inter-wavefront register access SIMT instructions can be used to control the processor to operate like or emulate a vector processor where each SP or other processing element corresponds to a vector processor lane. The variable wavefront capability can be used to control the processor to use only the minimum number of cycles needed to execute an instruction. Operating like or emulating multi-threaded CPUs, or even a simple microcontroller (MCU), is also possible. Because this is controlled by the instructions, the processor may be able to switch between these different architectural styles on an instruction-by-instruction basis with no or very little dead time.
The processor includes an instruction unit 1702 to receive the dot product SIMT instruction 1710 (e.g., from a cache (e.g., a system cache, a shared cache, or a level two (L2) cache, etc.) or memory 1799) and a SIMT processor 1703 (e.g., a streaming multiprocessor or a compute unit) having processing elements PE1 through PEn. Aside from characteristics pertaining to the dot product (instead of the variable wavefront characteristics already described), and unless otherwise specified, the instruction unit 1702, the SIMT processor 1703, and the processing elements PE1 through PEn may optionally have characteristics the same as or similar to those previously described. To avoid obscuring the description, the different and/or additional characteristics will primarily be described for these components without repeating the characteristics that may optionally be the same as or similar to those already described.
The processing elements PE1 through PEn may execute respective threads T1 through Tn of a first wavefront 1715, as well as other wavefronts which for simplicity of illustration are not shown. Each of these threads T1 through Tn may have a corresponding set of registers. For example, each of threads T1, T2, and Tn may have a corresponding set of registers R1-Ry, where y may be, for example, 8, 16, 32, 64, or some other number. These registers may potentially be registers allocated from a pool rather than dedicated sets of registers. Suitable widths for the registers and data include, but are not limited to, 8-bits, 16-bits, 32-bits, and 64-bits.
The instruction unit 1702 may receive the dot product SIMT instruction 1710. In some embodiments, the dot product SIMT instruction may be a low-level instruction or control signal (e.g., binary microcode, a machine-level instruction, a binary instruction, etc.) that the processor is natively able to execute. In other embodiments, the processor may have circuitry or other logic (e.g., instruction translation or conversion logic) to translate or convert the dot product SIMT instruction into one or more other instructions that the processor is natively able to execute.
The dot product SIMT instruction may optionally have attributes of the other instruction formats discussed elsewhere herein (e.g.,
In some embodiments, the dot product SIMT instruction may have a first source register identifier 1731 to identify a first source register as a source of a first data or operand, a second source register identifier 1732 to identify a second source register as a source of a second data or operand, and a destination register identifier 1733 to identify a destination register where a dot product result is to be stored. In the illustrated example, register R1 is the first source register, register R2 is the second source register, and register R3 is the destination register, although these are only examples. Each of the threads may have a respective copy of the registers R1 and R2. The register R3 is only used in one thread, in this case the first thread T1, since only one dot product result is generated for all the threads and only needs to be stored in one thread. Instead of the thread T1, one of the other threads could be used. The dot product SIMT instruction may also optionally have one or more fields to specify a datatype for the sources and/or the destination. Alternatively, such datatypes can be prescribed by the opcode. In some embodiments, the dot product SIMT instruction may optionally have a modifier 1734 to convert the dot product operation into a sum operation (e.g., a single bit to have one value for the dot product operation or another different value for the sum operation). The sum operation will be discussed further below.
The SIMT processor 1703 and/or the processing elements PE1 through PEn may perform operations corresponding to the dot product SIMT instruction. In some embodiments, these operations may include multiplying the first data or operands received from the first source registers (e.g., R1) by the second data or operands received from the second source registers (e.g., R2) across all the threads T1 through Tn of the first wavefront 1715 to generate n products. For some possible data sizes, each of the first and second source registers may hold a single value (e.g., each 32-bit register may hold a 32-bit value). For other possible data sizes, each of the first and second source registers may hold a plurality of values (e.g., each 32-bit register may hold two 16-bit values or four 8-bit values). In some embodiments, these operations may include summing the n products to generate a dot product result and then storing the dot product result in the destination register (e.g., R3) of one of the threads (e.g., thread T1). By way of example, this may represent a dot product across pairs of values in each of the threads T1 through Tn of the entire first wavefront. Alternatively, if desired, the dot product SIMT instruction may optionally be a variable wavefront SIMT instruction as described above.
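The following is a minimal sketch of the dot product operation described above for the case where each source register holds a single value: each thread of the wavefront contributes the product of its R1 and R2 contents, the n products are summed, and the single result is stored in R3 of one thread (thread T1 here). The wavefront width and values are assumptions; the optional sum modifier (discussed further below) is also illustrated by treating the second source as all ones.

```python
# Sketch only: r1_per_thread[i] and r2_per_thread[i] model the R1/R2 contents of
# thread i+1 of the wavefront; the return value models R3 of thread T1.
def wavefront_dot_product(r1_per_thread, r2_per_thread):
    return sum(a * b for a, b in zip(r1_per_thread, r2_per_thread))

if __name__ == "__main__":
    n = 8  # assumed wavefront width for this example
    r1 = [float(i + 1) for i in range(n)]   # R1 in threads T1..Tn
    r2 = [2.0] * n                          # R2 in threads T1..Tn
    r3_of_t1 = wavefront_dot_product(r1, r2)
    print(r3_of_t1)  # 72.0 = 2*(1+2+...+8)
    # With the optional sum modifier (1734), the second source is effectively all ones,
    # so the same operation returns the sum of the R1 values across the wavefront.
    print(wavefront_dot_product(r1, [1.0] * n))  # 36.0
```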
In some embodiments, the first and second data or operands may have a smaller size in bits and/or a lower precision (e.g., if floating-point format is used) than the result dot product. As one example, the first and second data or operands may be half-precision floating point (FP16) or Bfloat16 and the result dot product may be single-precision floating-point (FP32) or double-precision floating point (FP64). As another example, the first and second data or operands may have an 8-bit floating point format (e.g., E5M2 (five exponent bits and two mantissa bits) or E4M3 (four exponent bits and three mantissa bits)) and the result dot product may be FP16, Bfloat16, FP32, or FP64.
In some embodiments, the dot product SIMT instruction may optionally have the modifier 1734 to convert the dot product operation into a sum operation (e.g., a single bit to have one value for the dot product operation or another different value for the sum operation). By way of example, if the modifier indicates the sum operation, then the elements or values of the second source data or operand may each be replaced with a value of one (1) so that the multiplication of the first source data or operand and the second source data or operand just returns the first source data or operand. These values of the first source data or operand may be summed for each of the threads of the wavefront in place of the products to generate a sum of the values or elements of the first source data or operand across the wavefront in place of the dot product result.
One possible advantage of being able to perform such a dot product operation is that the reduction of the products is incorporated into the operation without needing to perform separate inter-thread communications. The dot product SIMT instruction may also optionally be used to perform matrix multiplications.
The variable wavefront SIMT instructions and the inter-wavefront SIMT instructions disclosed herein may be used for various purposes and algorithms subject mainly to the creativity of the programmer. Likewise, the dot product unit may be used for various purposes and algorithms. However, to further illustrate certain concepts, and to illustrate how these new features may potentially be used in combination and symbiotically in an algorithm, an illustrative example of how these new features may be used to perform reduction (combination) of values of an input vector will be further described. Initially a large number of threads may be initialized and elements of the input vector may be loaded. Consider in this example that there are 512 such threads each having a 32-bit register in which two 16-bit values of the input vector are to be loaded. Initially, a dot product unit as described herein (e.g., the dot product unit 1406, 1806) may be utilized. The dot product unit may reduce the elements in each wavefront into one thread (e.g., the first thread) of that wavefront (e.g., the elements in a wavefront having threads 32 to 47 may be reduced into thread 32). For example, the dot product unit may be used to add the two half-length vectors across the wavefronts and store the sums into the corresponding threads of one lane of threads (e.g., corresponding to one SP or processing element) spanning the wavefronts (e.g., threads T1, Tn+1 ... Tm*n+1 corresponding to the processing element 1604 of
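The following is a hedged sketch of the two-stage reduction just outlined, with assumed sizes: 512 threads each holding two 16-bit values (1024 elements in total), a wavefront width of 16 threads, and therefore 32 wavefronts. Stage one models the dot product unit (with the sum behavior) collapsing each wavefront's elements into one partial sum held by the first thread of that wavefront; stage two models combining those per-wavefront partial sums within a single lane of threads. The sizes and staging are illustrative, not prescribed.

```python
# Sketch of the reduction flow under assumed sizes; numerically it just sums the input.
NUM_THREADS = 512
VALUES_PER_THREAD = 2
WAVEFRONT_WIDTH = 16
NUM_WAVEFRONTS = NUM_THREADS // WAVEFRONT_WIDTH  # 32

def reduce_input_vector(values):
    assert len(values) == NUM_THREADS * VALUES_PER_THREAD
    # Stage 1: per-wavefront reduction into the first thread of each wavefront.
    partial_sums = []
    for wf in range(NUM_WAVEFRONTS):
        start = wf * WAVEFRONT_WIDTH * VALUES_PER_THREAD
        end = start + WAVEFRONT_WIDTH * VALUES_PER_THREAD
        partial_sums.append(sum(values[start:end]))
    # Stage 2: combine the per-wavefront partial sums within one lane of threads.
    return sum(partial_sums)

if __name__ == "__main__":
    data = [1.0] * (NUM_THREADS * VALUES_PER_THREAD)
    print(reduce_input_vector(data))  # 1024.0
```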
In some embodiments, a processor or SIMT processor may be operative to perform one or more low overhead loop instructions. One embodiment of a low overhead loop instruction is a “load_loop” instruction to initialize a loop and set a loop counter value. Then at a point in the subsequent code, another embodiment of a low overhead loop instruction referred to as a “branch_loop” instruction may be issued, along with a branch address. The branch address can be to any location in the instruction memory. This branch_loop instruction may decrement the loop counter value. If the loop counter is zero, the branch will not be taken and the program counter will increment to the next instruction. Either a single loop depth or multiple nested loop depths may be supported (e.g., using a loop counter stack).
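Below is a minimal sketch of the low overhead loop semantics described above, assuming that branch_loop decrements the loop counter and branches back while the counter remains non-zero, falling through when it reaches zero, and that nested loop depths are handled with a small loop counter stack. These ordering and nesting details are assumptions for illustration.

```python
# Sketch only: model the loop counter state touched by load_loop and branch_loop.
class LoopControl:
    def __init__(self):
        self.counters = []          # loop counter stack for optional nested loops

    def load_loop(self, count: int):
        self.counters.append(count)

    def branch_loop(self, pc: int, branch_address: int) -> int:
        """Return the next program counter value."""
        self.counters[-1] -= 1
        if self.counters[-1] == 0:  # counter exhausted: fall through, pop the loop
            self.counters.pop()
            return pc + 1
        return branch_address       # otherwise branch back to the loop body

if __name__ == "__main__":
    lc = LoopControl()
    lc.load_loop(3)
    trace = [lc.branch_loop(pc=10, branch_address=5) for _ in range(3)]
    print(trace)  # [5, 5, 11] -> the branch is taken twice, then execution falls through
```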
In other embodiments, the GPGPU or other GPU may be a soft GPU programmed into and/or mapped to structures of and/or implemented with a programmable logic device (PLD), such as, for example, a field programmable gate array (FPGA). Programmable logic devices (PLDs), such as field programmable gate arrays (FPGAs), provide remarkable customizability to implement different system designs. Indeed, an incredible variety of circuitry may be implemented on programmable logic fabric of a PLD, including GPUs and other types of processors. When processors are implemented using the programmable logic fabric of a PLD, they may be referred to as “soft logic” processors since they are implemented through a configuration of the programmable logic fabric. Yet, while versatile, soft logic processors are typically lower performance (e.g., floating-point operations per second (FLOPs) and maximum frequency (Fmax)) compared to hard processors. Indeed, soft logic GPUs may be larger (e.g., 100K+ lookup tables (LUTs)) and potentially relatively slower (e.g., 100 MHz - 250 MHz).
With this in mind,
The designers may implement their high-level designs using design software 1937, such as a version of Intel® Quartus® by INTEL CORPORATION. The design software may use a compiler 1938 to convert the high-level program into a lower-level description. The compiler may provide machine-readable instructions representative of the high-level program to a host 1939 and the integrated circuit device 1901. The host may receive a host program 1940 which may be implemented by the kernel programs 1941. To implement the host program, the host may communicate instructions from the host program to the integrated circuit device via a communications link 1942, which may be, for example, direct memory access (DMA) communications or peripheral component interconnect express (PCIe) communications. In some embodiments, the kernel programs and the host may enable configuration of one or more DSP blocks 1943 on the integrated circuit device 1901. The DSP block may include circuitry to implement, for example, operations to perform matrix-matrix or matrix-vector multiplication for artificial intelligence (AI) or non-AI data processing. The integrated circuit device may include many (e.g., from hundreds to thousands) of the DSP blocks. Additionally, the DSP blocks may be communicatively coupled to one another such that data output from one DSP block may be provided to other DSP blocks.
While the techniques discussed above are described with respect to the application of a high-level program, in some embodiments, the designer may use the design software to generate and/or to specify a low-level program, such as the low-level hardware description languages described above. Further, in some embodiments, the system may be implemented without a separate host program. Moreover, in some embodiments, the techniques described herein may be implemented in circuitry as a non-programmable circuit design. Thus, embodiments described herein are intended to be illustrative and not limiting.
Turning now to a more detailed discussion of the integrated circuit device 1901,
Programmable logic devices, such as the integrated circuit device 1901, may contain programmable elements 2049 within the programmable logic 2048. For example, as discussed above, a designer (e.g., a customer) may program (e.g., configure) the programmable logic to perform one or more desired functions. By way of example, some programmable logic devices may be programmed by configuring their programmable elements using mask programming arrangements, which is performed during semiconductor manufacturing. Other programmable logic devices are configured after semiconductor fabrication operations have been completed, such as by using electrical programming or laser programming to program their programmable elements. In general, programmable elements may be based on any suitable programmable technology, such as fuses, antifuses, electrically programmable read-only-memory technology, random-access memory cells, mask-programmed elements, and the like, and combinations thereof.
Many programmable logic devices are electrically programmed. With electrical programming arrangements, the programmable elements may be formed from one or more memory cells. For example, during programming, configuration data is loaded into the memory cells using pins 2046 and input/output circuitry 2045. In one embodiment, the memory cells may be implemented as random-access-memory (RAM) cells. The use of memory cells based on RAM technology as described herein is intended to be only one example. Further, because these RAM cells are loaded with configuration data during programming, they are sometimes referred to as configuration RAM cells (CRAM). These memory cells may each provide a corresponding static control output signal that controls the state of an associated logic component in programmable logic 2048. For instance, in some embodiments, the output signals may be applied to the gates of metal-oxide-semiconductor (MOS) transistors within the programmable logic.
Keeping the foregoing in mind, the DSP block 1943 along with programmable logic 2048 may be used to implement a soft logic GPU, also referred to herein as a soft GPU. The soft logic GPU or soft GPU may have any of the features described elsewhere herein (e.g., have a SIMT architecture, have multiple streaming processors (SP) per streaming multiprocessor (SM), utilize the variable wavefront width aspects, utilize the inter-thread register access aspects, and so on). The control plane (including the instruction fetch, decode, and sequencer, as well as the thread initialization circuitry) for the SMs may optionally be logically separate from the processing plane (including the SMs), so no data or signaling may need to be passed back to the control plane. This may optionally allow the control signals and immediate data buses to be pipelined on the way to the SM. The SMs may contain most of the memory as well as the DSP blocks and may optionally be physically or logically placed in a sector for deterministic performance. The features of floating point 32 (FP32) DSP blocks may be used to increase the efficiency of matrix operations. In contrast to the SM, the control plane may tend to have less logic and may tend to use more random logic (e.g., the SM may be architected as a highly structured design), which may help to close timing at similar performance levels to the SM without significant compilation constraints.
In some embodiments, the soft GPU may optionally include a special function unit (SFU) to provide additional special and potentially complex functionality, such as elementary functions. The ability to include such an SFU is one advantage of the FPGA or other PLD design. By way of example, the SFU may provide a specific function such as, for example, an inverse square root function, multiple trigonometric operations, and so on.
Current GPUs often run at a frequency of around 1 GHz, with overclocking to around 1.4 GHz. FPGA soft logic is often slower than ASIC logic, so the soft GPU may have a frequency of less than 1 GHz. In some embodiments, the soft GPU may run at a high frequency for an FPGA (e.g., optionally up to around 1 GHz).
In some cases, the soft GPU may tend to have reduced memory capability as compared to a hard GPU (e.g., especially in the writeback phase to shared memory). In some cases, a true dual-port memory (i.e., two read ports and two write ports) may optionally be supported by the soft GPU. In other cases, since multi-ported memories tend to be expensive, they may instead be emulated (e.g., using an internal multi-cycle operation), rather than being supported through a dedicated hardware solution. Such emulation may tend to reduce the maximum frequency of the memory in this mode. Different memory architectures (e.g., numbers of read and write ports and memory size) may be used according to the tradeoffs considered appropriate for the implementation. Reduced memory capability, if present, may also be partly mitigated using the thread snooping and variable wavefront and thread depth instruction extensions discussed above. It is to be appreciated that even though, in some implementations, the soft GPU may have one or more reduced performance attributes as compared to a hard GPU, the soft GPU may still be useful for other reasons (e.g., flexibility, customizability, etc.).
In addition to the embodiments described above, other embodiments pertain to an example embodiment of a soft GPU discussed further below. The example embodiment of the soft GPU discussed further below may optionally use any of the embodiments discussed above (e.g., the variable wavefront SIMT instruction 1507, the inter-wavefront register access SIMT instruction 1608, the dot product SIMT instruction 1710). However, the embodiments discussed above (e.g., the variable wavefront SIMT instruction 1507, the inter-wavefront register access SIMT instruction 1608, the dot product SIMT instruction 1710) are certainly not limited to the example embodiment of the soft GPU discussed further below. Rather, the embodiments discussed above (e.g., the variable wavefront SIMT instruction 1507, the inter-wavefront register access SIMT instruction 1608, the dot product SIMT instruction 1710) may each be implemented in any of the other SIMT processors or GPUs discussed elsewhere herein including hard GPUs.
The instruction unit of the soft GPU may include an instruction fetch unit to determine the next instruction memory address. In some cases, sophisticated logic may be used to make such a determination, since many instructions run for many cycles, although some can be modified on an instruction-by-instruction basis to run another number of cycles, or just a single cycle. Zero overhead loops, subroutines, and simple branches may also impact the address generation. A relatively wide (e.g., 40-bit) instruction word is defined. The program length is relatively short for this type of soft GPU and its intended uses, and 40 bits is a directly supported width in certain commercially available FPGAs, so this is a reasonable implementation choice. The sequencer may track the number of cycles per operation and control that the correct wavefront is being accessed. Each thread register space in the SPs may be initialized with a thread identifier (ID) that may be used to identify it (e.g., and multiple dimensions may optionally be supported). Individual thread IDs are typically used for address generation. In some embodiments, the ISA may include instruction(s) to load thread IDs created by the thread generator into the corresponding thread register space. In some embodiments, certain simplifications may optionally be made to the instruction unit to help increase the speed. For example, simplifications may be made in branching support. By way of example, a branch taken will potentially invalidate the following two instructions, so two NOPs may optionally be introduced after a branch instruction, whether the branch instruction is taken or not. This may include the subroutine jumps and returns, unconditional branches, and zero overhead loops.
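As an illustration of the branching simplification described above, the following sketch shows an assembler-style pass that inserts two NOPs after every branch-type instruction so that the two instruction slots following a branch are never live, whether or not the branch is taken. The mnemonics and the set of branch-type instructions are assumptions; the soft GPU's actual instruction set is not detailed here.

```python
# Sketch only: fill the two potentially invalidated slots after each branch with NOPs.
BRANCH_MNEMONICS = {"branch", "branch_loop", "jump_subroutine", "return"}  # assumed names

def insert_branch_nops(program):
    out = []
    for instr in program:
        out.append(instr)
        if instr.split()[0] in BRANCH_MNEMONICS:
            out.extend(["nop", "nop"])  # fill the two slots following the branch
    return out

if __name__ == "__main__":
    prog = ["load r1, [r2]", "branch_loop loop_start", "add r3, r1, r4"]
    print(insert_branch_nops(prog))
    # ['load r1, [r2]', 'branch_loop loop_start', 'nop', 'nop', 'add r3, r1, r4']
```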
The critical path of this architecture is in the instruction fetch portion, with several paths returning approximately the same performance. The instruction section, which includes the instruction fetch, instruction decode, wavefront sequencer, and the thread ID generator, is relatively small. The instruction memory also forms part of this section. An example instruction memory may include a 1 K x 40-bit memory implemented in two M20Ks. The instruction memory may be reloaded with a new program from outside the soft GPU. There may be one or more relatively long combinatorial paths in this section, most of which are feedback into the instruction fetch portion. For example, one may be the immediate branch value from the instruction memory to the program counter. This may be pipelined, although it would increase the branch penalty from two to three, making some programs less efficient. Another critical path may be the calculation of the signal which indicates that the current instruction is complete and the program counter can be incremented. Such a calculation or signal may be based on various possible conditions, such as, for example, whether the instruction is single cycle or multi-cycle, if the wavefront is complete (e.g., several dynamic partial wavefront controls may be possible in some embodiments), if the load or store operations are complete (e.g., in some embodiments there may be multiple partial run options).
In some cases, the control plane (including the instruction fetch, decode, and sequencer, as well as the thread initialization circuitry) for the SMs may be logically separate from the processing plane. In some cases, no data or signaling may be passed from the processing plane to the control plane. In some cases, there may be no data dependent branches made, only loop dependent decisions, which are all contained in the instruction portion. There are no data dependent operations in the SIMT processor that impact the instruction unit. There may be certain data dependent decisions in the SIMT processor, but there is no decision information fed back to the instruction unit (e.g., instruction fetch or sequencer).
This may allow various levels of pipelining between the instruction unit and the SIMT processor. For example, the control signals and immediate data buses may be pipelined on the way to the SM. This will eventually let the SM be floor planned or placed relatively independently of the instruction portion, making it easier to close timing on even large, complex, system designs. Also, the development of their fitting characteristics and placement work may proceed relatively independently. As the instruction core is relatively small, it should have similar placement and performance characteristics in a wide variety of environments. As the structure of these two sections tends to be different (the instruction section has relatively more random logic and the SIMT processor has relatively more data paths), timing can more easily be closed on systems using the soft GPU, either as an automatically placed design, or as the concatenation of two carefully floor planned components. The SM contains most of the memory and all the DSP blocks and can be physically or logically placed in a sector for deterministic performance.
For the soft GPU, most functional logic is implemented in embedded FPGA features, such as, for example, the M20K memory blocks in Intel FPGAs. Some of the integer ALU may be constructed in soft logic, but much of the remaining logic in the SM may be mostly multiplexers and registers, which are typically directly and efficiently supported by FPGAs. The soft GPU may be compiled in a Stratix 10 1SG280LN2F43E1VG device using Quartus Prime 20.3, for example. In Stratix 10, this design may optionally have a clock frequency of around 500 MHz.
Components, features, and details described for any of the GPUs or other processors disclosed herein may optionally apply to any of the methods disclosed herein, which in embodiments may optionally be performed by and/or with such GPUs or processors. Any of the GPUs or other processors described herein in embodiments may optionally be included in any of the systems disclosed herein. Any of the instructions disclosed herein may optionally be performed by any of the GPUs or other processors disclosed herein.
References to “one example,” “an example,” etc., indicate that the example described may include a particular feature, structure, or characteristic, but every example may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same example. Further, when a particular feature, structure, or characteristic is described in connection with an example, it is submitted that it is within the knowledge of one skilled in the art to effect such feature, structure, or characteristic in connection with other examples whether or not explicitly described.
GPUs and their components disclosed herein may be said and/or claimed to be operative, operable, capable, able, configured, adapted, or otherwise “to” perform an operation. For example, a GPU may be said and/or claimed “to” perform operations corresponding to an instruction. As used herein, these expressions refer to the characteristics, properties, or attributes of the GPU or its components when in a powered-off state, and do not imply that the GPU or its components are currently operating or powered up. For clarity, it is to be understood that the GPUs and their components as claimed herein are not powered on or running.
In the description and claims, the terms “coupled” and/or “connected,” along with their derivatives, may have been used. These terms are not intended as synonyms for each other. Rather, in embodiments, “connected” may be used to indicate that two or more elements are in direct physical and/or electrical contact with each other. “Coupled” may mean that two or more elements are in direct physical and/or electrical contact with each other. However, “coupled” may also mean that two or more elements are not in direct contact with each other, but yet still cooperate or interact with each other. For example, a SIMT processor may be coupled with an instruction unit by one or more intervening components. In the figures, arrows are used to show connections and couplings.
Some embodiments include an article of manufacture (e.g., a computer program product) that includes a machine-readable medium. The medium may include a mechanism that provides, for example stores, information in a form that is readable by the machine. The machine-readable medium may provide, or have stored thereon, an instruction or sequence of instructions, that if and/or when executed by a machine are operative to cause the machine to perform and/or result in the machine performing one or more operations, methods, or techniques disclosed herein.
In some embodiments, the machine-readable medium may include a tangible and/or non-transitory machine-readable storage medium. For example, the non-transitory machine-readable storage medium may include a floppy diskette, an optical storage medium, an optical disk, an optical data storage device, a CD-ROM, a magnetic disk, a magneto-optical disk, a read only memory (ROM), a programmable ROM (PROM), an erasable-and-programmable ROM (EPROM), an electrically-erasable-and-programmable ROM (EEPROM), a random access memory (RAM), a static-RAM (SRAM), a dynamic-RAM (DRAM), a Flash memory, a phase-change memory, a phase-change data storage material, a non-volatile memory, a non-volatile data storage device, a non-transitory memory, a non-transitory data storage device, or the like. The non-transitory machine-readable storage medium does not consist of a transitory propagated signal. In some embodiments, the storage medium may include a tangible medium that includes solid-state matter or material, such as, for example, a semiconductor material, a phase change material, a magnetic solid material, a solid data storage material, etc. Alternatively, a non-tangible transitory computer-readable transmission media, such as, for example, an electrical, optical, acoustical or other form of propagated signals - such as carrier waves, infrared signals, and digital signals, may optionally be used.
Examples of suitable machines include, but are not limited to, GPUs, GPGPUs, FPGAs, digital logic circuits, integrated circuits, computer systems, and electronic devices. Examples of suitable computer systems and electronic devices include, but are not limited to, desktop computers, laptop computers, tablet computers, smartphones, servers, set-top boxes, video game controllers, and the like.
Moreover, in the various examples described above, unless specifically noted otherwise, disjunctive language such as the phrase “at least one of A, B, or C” or “A, B, and/or C” is intended to be understood to mean either A, B, or C, or any combination thereof (i.e. A and B, A and C, B and C, and A, B and C).
When introducing elements of various embodiments of the present disclosure, the articles “a,” “an,” and “the” are intended to mean that there are one or more of the elements. The terms “comprising,” “including,” and “having” are intended to be inclusive and mean that there may be additional elements other than the listed elements. Additionally, references to “one embodiment” or “an embodiment” of the present disclosure are not intended to be interpreted as excluding the existence of additional embodiments that also incorporate the recited features.
While the embodiments set forth in the present disclosure may be susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and have been described in detail herein. However, it should be understood that the disclosure is not intended to be limited to the particular forms disclosed. The disclosure is to cover all modifications, equivalents, and alternatives falling within the spirit and scope of the disclosure as defined by the following appended claims.
The techniques presented and claimed herein are referenced and applied to material objects and concrete examples of a practical nature that demonstrably improve the present technical field and, as such, are not abstract, intangible or purely theoretical. Further, if any claims appended to the end of this specification contain one or more elements designated as “means for [perform]ing [a function]...” or “step for [perform]ing [a function]...”, it is intended that such elements are to be interpreted under 35 U.S.C. 112(f). However, for any claims containing elements designated in any other manner, it is intended that such elements are not to be interpreted under 35 U.S.C. 112(f).
In the description above, specific details have been set forth in order to provide a thorough understanding of the embodiments. However, other embodiments may be practiced without some of these specific details. Various modifications and changes may be made thereunto without departing from the broader spirit and scope of the disclosure as set forth in the claims. The specification and drawings are, accordingly, to be regarded in an illustrative rather than a restrictive sense. The scope of the invention is not to be determined by the specific examples provided above, but only by the claims below. In other instances, well-known circuits, structures, devices, and operations have been shown in block diagram form and/or without detail in order to avoid obscuring the understanding of the description.
The following examples pertain to further embodiments. Specifics in the examples may be used anywhere in one or more embodiments.
Example 1 is a GPU, SIMT processor, or other processor including an instruction unit to receive a single instruction, multiple thread (SIMT) instruction. The SIMT instruction has at least one field to provide at least one value. The at least one value is to indicate a plurality of threads that are to execute the SIMT instruction. The processor also includes a SIMT processor coupled with the instruction unit. The SIMT processor is to execute the SIMT instruction for each of the plurality of threads.
Example 2 includes the processor of Example 1, where the at least one value is to indicate the plurality of threads as being only a subset of a plurality of threads configured for the processor.
Example 3 includes the processor of any one of Examples 1 to 2, where the at least one value is to indicate the plurality of threads as being only a subset of a plurality of threads initialized to execute code including the SIMT instruction.
Example 4 includes the processor of any one of Examples 1 to 3, where the SIMT processor includes a plurality of processing elements initialized to execute a plurality of threads of a parallel thread group concurrently, and optionally where the at least one value is to indicate that only a subset of the plurality of processing elements are to execute the SIMT instruction.
Example 5 includes the processor of Example 4, where the subset is one of only a single thread, only half the plurality of processing elements, only one quarter the plurality of processing elements, or only one eighth the plurality of processing elements.
Example 6 includes the processor of any one of Examples 4 to 5, where the plurality of threads of the parallel thread group are a warp or a wavefront.
Example 7 includes the processor of any one of Examples 1 to 6, where the SIMT processor includes a plurality of processing elements initialized to execute a plurality of threads of a parallel thread group concurrently, and optionally where the at least one value is to indicate a number of times the plurality of processing elements are to execute the SIMT instruction sequentially.
Example 8 includes the processor of any one of Examples 1 to 7, where the at least one value is to indicate only a subset of warps initialized to execute code including the SIMT instruction or only a subset of wavefronts initialized to execute code including the SIMT instruction.
Example 9 includes the processor of any one of Examples 1 to 8, where the instruction unit is to receive a second SIMT instruction, the second SIMT instruction having at least one field to provide a source thread identifier and at least one field to provide a source register identifier.
Example 10 includes the processor of Example 9, where a processing element of the SIMT processor is to execute the second SIMT instruction for a first thread to receive data from a register that is to be identified by the source register identifier of a second, different thread that is to be identified by the source thread identifier.
Example 11 includes the processor of any one of Examples 9 to 10, where the second SIMT instruction has at least one field to provide a second source thread identifier and at least one field to provide a second source register identifier, and where the processing element is to execute the second SIMT instruction for the first thread to receive data from a second register that is to be identified by the second source register identifier of a third, different thread that is to be identified by the second source thread identifier.
Example 12 includes the processor of any one of Examples 9 to 11, where the second SIMT instruction has at least one field to provide a destination thread identifier and at least one field to provide a destination register identifier, and where the processing element is to execute the second SIMT instruction for the first thread to store a result in a third register that is to be identified by the destination register identifier in a third, different thread that is to be identified by the destination thread identifier.
Example 13 is a method including receiving a single instruction, multiple thread (SIMT) instruction. The SIMT instruction has at least one field providing at least one value. The at least one value indicating a plurality of threads that are to execute the SIMT instruction. The method also includes executing the SIMT instruction for each of the plurality of threads on a SIMT processor.
Example 14 includes the method of Example 13, where the at least one value indicates the plurality of threads as being only a subset of a plurality of threads configured and initialized to execute code including the SIMT instruction.
Example 15 includes the method of any one of Examples 13 to 14, where the at least one value indicates that only a subset of a plurality of processing elements initialized to execute a plurality of threads of a parallel thread group concurrently are to execute the SIMT instruction concurrently.
Example 16 includes the method of any one of Examples 13 to 15, where the at least one value indicates a number of times a plurality of processing elements, which are to execute threads of a thread group concurrently, are to execute the SIMT instruction sequentially.
Example 17 includes the method of any one of Examples 13 to 16, further including receiving a second SIMT instruction, the second SIMT instruction having at least one field providing a source thread identifier and at least one field providing a source register identifier. And, further including executing the second SIMT instruction with a processing element of a SIMT processor for a first thread to receive data from a register identified by the source register identifier of a second, different thread identified by the source thread identifier.
Example 18 is a system including a processor including an instruction unit to receive a single instruction, multiple thread (SIMT) instruction. The SIMT instruction has at least one field to provide at least one value. The at least one value to indicate a plurality of threads that are to execute the SIMT instruction. The processor also includes a SIMT processor coupled with the instruction unit. The SIMT processor is to execute the SIMT instruction for each of the plurality of threads. The system also includes a dynamic random access memory (DRAM) coupled with the processor.
Example 19 includes the system of Example 18, where the SIMT processor includes a plurality of processing elements initialized to execute a plurality of threads of a parallel thread group concurrently, and optionally where the at least one value is to indicate that only a subset of the plurality of processing elements are to execute the SIMT instruction.
Example 20 includes the system of any one of Examples 18 to 19, where the SIMT processor includes a plurality of processing elements initialized to execute a plurality of threads of a parallel thread group concurrently, and optionally where the at least one value is to indicate a number of times the plurality of processing elements are to execute the SIMT instruction sequentially.
Example 21 is a GPU, SIMT processor, other processor, or other apparatus operative to perform the method of any one of Examples 13 to 17.
Example 22 is a GPU, SIMT processor, other processor, or other apparatus including means for performing the method of any one of Examples 13 to 17.
Example 23 is a GPU, SIMT processor, other processor, or other apparatus including any combination of modules and/or units and/or logic and/or circuitry and/or means operative to perform the method of any one of Examples 13 to 17.
Example 24 is an optionally non-transitory and/or tangible machine-readable medium, which optionally stores or otherwise provides instructions including a SIMT instruction, the SIMT instruction if and/or when executed by a processor, computer system, electronic device, or other machine, is operative to cause the machine to perform the method of any one of Examples 13 to 17.
This application claims the benefit of U.S. Provisional Application No. 63/443,314, filed Feb. 3, 2023, which is hereby incorporated by reference.