The disclosure herein relates to executing multiply-accumulate (MAC) operations in a multi-threaded computer architecture, and particularly to performing MAC operations using a plurality of vector processing units of one processor.
Graphics processing unit (GPU) architecture has provided a way to execute parallel threads in a Single Instruction Multiple Threads (SIMT) fashion. A SIMT processor such as a GPU has many cores configured to execute multiple threads simultaneously and is especially suitable for massive parallel computing applications. To take advantage of multiple cores for parallel execution, computer programs normally have to be tailored for the multi-core architecture by invoking functions of an Application Programming Interface (API) specifically designed to work on multiple cores. More recently, general-purpose computing on GPUs (GPGPU), which makes use of a GPU to perform computation in applications traditionally handled by the central processing unit (CPU), has become more practical and popular.
SIMT processors such as GPUs have been used extensively for machine learning based data processing, such as, but not limited to, artificial neural networks for image processing, natural language processing, etc. Such data processing involves massive matrix multiplications. The basic operations for matrix multiplication are the Multiply-Accumulate (MAC) operations. Therefore, there is a need in the art for a processor to efficiently perform MAC operations.
The present disclosure describes apparatus, methods, and systems for efficiently performing multiply-accumulate (MAC) operations in one processor. In an exemplary embodiment, a processor may comprise a memory unit and a plurality of columns of vector processing units coupled to the memory unit. Each column of the vector processing units may comprise a memory port (MP) configured for vector data access operations and a processing element (PE) having a vector Arithmetic Logic Unit (ALU) configured for vector data processing. The MP of each column of the vector processing units may comprise: an input buffer to store a first matrix loaded from the memory unit and a vector multiply-add (MAD) unit that contains a plurality of MAD units. Each MAD unit of the plurality of MAD units may comprise: a first input coupled to one of the storage units of the input buffer holding a row of the first matrix, a second input coupled to the memory unit to load a second matrix from the memory unit, and a plurality of multipliers and a multiple-input adder. The plurality of multipliers may be configured to generate a set of multiplication results with each multiplication result being generated by multiplying an element of a row of the first matrix and a corresponding element of the second matrix. The vector MAD unit may be configured to generate a vector of MAD results with each multiple-input adder configured to generate a MAD result of the vector of MAD results by adding all multiplication results of the set of multiplication results generated by the plurality of multipliers of a same MAD unit as the respective multiple-input adder. The MP may be configured to output the vector of MAD results as a vector input to the vector ALU of a PE of a same column as the MP.
In another exemplary embodiment, a method may comprise: loading a first matrix into a buffer of a memory port of a column of vector processing units, and loading a second matrix into the memory port. The buffer may be an input buffer to a vector multiply-add (MAD) unit that contains a plurality of MAD units, each row of the first matrix may be a first input to a respective MAD unit of the vector MAD unit, and the second matrix may be a common second input to each MAD unit of the vector MAD unit. The method may further comprise generating, in each MAD unit of the vector MAD unit, a respective set of multiplication results by multiplying elements of a respective row of the first matrix and corresponding elements of the second matrix, generating a vector of MAD results, and outputting the vector of MAD results from the memory port as a vector input to a vector Arithmetic Logic Unit (ALU) of a processing element of the column of vector processing units. Each MAD result may be generated by adding all multiplication results in a respective set of multiplication results in a respective MAD unit of the vector MAD unit.
Reference will now be made in detail to the embodiments of the present teaching, examples of which are illustrated in the accompanying drawings. Like elements in the various figures are denoted by like reference numerals for consistency. While the present teaching will be described in conjunction with the embodiments, it will be understood that they are not intended to limit the present teaching to these embodiments. On the contrary, the present teaching is intended to cover alternatives, modifications, and equivalents, which may be included within the spirit and scope of the present teaching as defined by the appended claims.
In addition, in the following detailed description of embodiments of the present teaching, numerous specific details are set forth in order to provide a thorough understanding of the present teaching. However, it will be recognized by one of ordinary skill in the art that the present teaching may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the embodiments of the present teaching.
In some embodiments, the processor 100 may be configured for massive thread level parallel processing. For example, one processing element (PE) in the PE array 114 may comprise a vector Arithmetic Logic Unit (ALU) with a vector size of more than one, and each ALU of a vector ALU may be configured to perform the same operation but on different data (e.g., each thread may operate on its own data). That is, in these embodiments with multiple ALUs, each PE may be configured to operate in a Single Instruction Multiple Threads (SIMT) fashion. In one embodiment, a PE with multiple vector data inputs may generate one vector data output. In some embodiments, a thread may also be referred to as a stream.
To provide data for multiple threads to be executed concurrently, in some embodiments, some relevant electronic connections between components of the processor 100 may be in vectors. For example, a vector address connection of K×G bits may have K number of G-bit addresses and a vector data connection of K×M bits may have K number of M-bit words of data. It should also be noted that although not shown in any of the figures, data or address connections between different components may be accompanied by one or more signal lines. For example, a busy signal line may exist between a first component and a second component, and may be used by the first component to send a busy signal to the second component indicating that the first component is not ready to accept valid data or address signals. Moreover, a valid signal line may also exist between the first and second components, and may be used by the second component to send a valid signal to the first component indicating that valid data or address signals have been put on the connection wires.
The configuration memory 104 may store data path programs including arithmetic and logical instructions, and load and store instructions for data path components. In one embodiment, the data path programs stored in the configuration memory 104 may be sequence(s) of compiled instructions. For example, a data path program may include instructions to be executed by the PE array 114, which may specify what kind of operations PEs may perform, and how data path components may hold or transmit data.
The sequencer 106 may decode the instructions stored in the configuration memory 104. The instructions may include scalar instructions and vector instructions. For a scalar instruction, the sequencer 106 may decode the scalar instruction and perform the scalar operation coded by the scalar instruction. For a vector instruction, the sequencer 106 may decode the vector instruction and deliver the decoded vector instruction to various components of the PE array 114 (e.g., components of the PE array 114 that will be involved in arithmetic and logical operations, and data movement operations), such that the vector operations coded by the vector instruction may be performed by the components of the PE array 114. These components of the PE array 114 may also be referred to as vector processing units. As used herein, a scalar operation may refer to execution of a scalar instruction and a vector operation may refer to execution of a vector instruction.
The decoded vector instruction may be delivered to various components in a package, which may be referred to as a configuration package or simply a configuration. In addition to the decoded instruction, a configuration package for one component may include some other parameters (e.g., the number of warps specifying how many times an instruction is to be repeatedly executed and how many times data passes through a data switching unit in one configuration setting, and immediate values passed from the sequencer to the component). As used herein, a warp may refer to a number of threads concurrently executed in one PE, for example, for a PE with 32 ALUs, a warp may refer to 32 threads executed in parallel by a PE. In one embodiment, a physical data path configuration may be referred to as a physical data path program, which may comprise individual configurations for various components included in a physical data path.
Although not shown, there may be a configuration bus connecting the sequencer 106 to the components of the PE array 114 for individual configurations to be delivered to these components via the bus. For example, there may be a configuration bus for delivering the configurations for the memory ports, switch boxes and PEs. In some embodiments, the configuration for the memory ports may include data preparation instructions, such as but not limited to, LOAD/STORE instructions (and parameters, such as addresses, for the instructions), and the configuration for the PEs may include instructions to be executed by the ALUs in the PEs, such as but not limited to, data crunching instructions like addition or subtraction.
The memory unit 112 may be a data staging area to store data received from the external bus 130 and also execution result data generated by the PE array 114 (before these results may be transmitted away via the external bus 130). In some embodiments, the memory unit 112 may be an in-processor cache for a large memory system external to the processor 100.
The PE array 114 may comprise a plurality of memory ports (MPs) 120.1-120.N, a plurality of switch boxes (SBs) 122.1-122.N, and a plurality of processing elements (PEs) 118.1-118.N. These components may form N columns of programmable hardware units or programmable hardware components. For example, the MP 120.1, SB 122.1, and PE 118.1 may form the first column of the PE array 114, and the MP 120.N, SB 122.N, and PE 118.N may form the Nth column of the PE array 114. Each column of programmable hardware units may also be referred to as a column of vector processing units or simply a column in an embodiment in which each of these hardware units is configured for vector processing. In the example shown in
The plurality of MPs 120.1-120.N may be programmable hardware units controlling data flow between the PE array 114 and the memory unit 112. As shown in
With the exception of MP 120.1 and MP 120.N, all MPs may be coupled to two adjacent MPs such that each MP may be configured to receive addresses from a preceding MP and/or transmit addresses to a succeeding MP. The electronic coupling between MPs may provide a one-way flow of addresses (e.g., if one configuration specifies that addresses may flow from one MP to a next MP). For example, as shown in
The gasket memory 116 may be used as a data buffer, for example, first-in-first-out (FIFO), to collect addresses and data from the PE array (e.g., from MP 120.N, PE 118.N and/or SB 122.N) and feed them back to the PE array (e.g., to MP 120.1, and/or SB 122.1) when the first column of the PE array (e.g., MP 120.1, SB 122.1, and PE 118.1) is reconfigured by new configurations.
As illustrated in
The SBs 122.1-122.N may be configured to provide data switching for data to be routed between neighboring PEs, from a PE to a MP, from a PE to the data routing buses, from a MP to a PE, from a MP to the data routing buses, from the data routing buses to a PE, and from the data routing buses to a MP. For example, the switch box 122.1 may be configured to provide data switching for data to be delivered to the processing element 118.1 from the gasket memory 116, the MP 120.1 or both. Moreover, the switch box 122.1 may be configured to route data from the gasket memory 116 to the MP 120.1. As another example, the switch box 122.2 may be configured to provide data switching for data to be delivered to the processing element 118.2 from the processing element 118.1, the MP 120.2, and/or the SB 122.1. Moreover, the switch box 122.2 may be configured to route data from the processing element 118.1 to the MP 120.2 or SB 122.3, from the SB 122.1 to the MP 120.2 or SB 122.3. In yet another example, the switch box 122.N may be configured to provide data switching for data to be delivered to the processing element 118.N from the PE 118.N−1, the MP 120.N, the SB 122.N−1 or any combination of the three sources. Moreover, the switch box 122.N may be configured to route data between PE 118.N−1, MP 120.N, SB 122.N−1 and the gasket memory 116. A SB may also be referred to as a data switching unit.
In some embodiments, output ports of vector processing units (e.g., each MP, each SB, and each PE) may be vector address or vector data ports. Address or data buffers at the output ports may be treated as vector registers. For example, a data buffer at one output port of the PE 118.1 coupled to the SB 122.2 may be treated as a vector register for holding a vector of input values to the SB 122.2. A data buffer at another output port of the PE 118.1 coupled to the SB 122.2 may be treated as another vector register to hold another vector of input values to the SB 122.2. Also, data buffers at output ports of the SB 122.1 coupled to the SB 122.2 may be treated as vector registers for holding vectors of data values to be passed to the SB 122.2.
In one embodiment, data or address buffers at output ports of vector processing units may be mapped to vector registers labeled as VA, VB, IA, IB, IC, ID, IE, IF, IG, IH and DVA. VA and VB may be vector registers for output data buffers of a PE. IA, IB, IC, ID, IE, IF, IG, and IH may be vector registers for output data buffers of a SB at output ports coupled to input ports of the succeeding SB or the gasket memory. DVA may be a vector register for an output address buffer of a MP. Also, virtual vector registers MA and MB may be mapped to data connections from a MP to a SB so that the SB may route data read from the memory unit 112 to input ports of a PE. MA and MB may represent data obtained by the shared memory access and by the private memory access, respectively. The width of DVA may be K×G bits. The widths of other vector registers may be K×M bits. The width of ALUs in a PE may be configured to be either M bits (one word) or 2×M bits (two words). To support 2×M-bit operations, 2 vector registers may be concatenated into a register pair and labeled as VAB, IAB, ICD, IEF, IGH, and MAB. For example, IAB may indicate a concatenated vector register pair (IB, IA) with IB being the higher M bits and IA being the lower M bits. Here (,) denotes component by component concatenation of M-bit data from 2 vector registers.
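As a minimal sketch (not the hardware implementation), the component-by-component concatenation of two K-element M-bit vector registers into a register pair such as IAB = (IB, IA) may be modeled in Python as follows; the values of M and K and the helper name are example assumptions.

M = 16   # word width in bits (example value)
K = 32   # vector size (number of elements per vector register)

def concat_pair(hi_reg, lo_reg):
    # Concatenate two K-element M-bit vector registers component by component,
    # e.g., IAB[i] = (IB[i], IA[i]) with IB[i] in the higher M bits.
    assert len(hi_reg) == len(lo_reg) == K
    mask = (1 << M) - 1
    return [((hi & mask) << M) | (lo & mask) for hi, lo in zip(hi_reg, lo_reg)]

# Example: IAB = (IB, IA)
IA = list(range(K))                  # lower M bits of each element
IB = [x + 100 for x in range(K)]     # higher M bits of each element
IAB = concat_pair(IB, IA)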
An exemplary data path may be illustrated by the exemplary internal connections of the SBs 122.1 to 122.N. For example, as shown in
To simplify wording, a MP (or a MP 120) may refer to one of the MPs 120.1-120.N, a SB (or a SB 122) may refer to one of the SBs 122.1-122.N, and a PE (or a PE 118) may refer to one of the PEs 118.1-118.N.
A mixed-scalar-vector instruction set may be defined for the processor 100. The MPs 120.1-120.N, SBs 122.1-122.N, and PEs 118.1-118.N may be vector processing units of the processor 100 and the sequencer 106 may be a scalar processing unit of the processor 100. The instruction set may be designed such that one instruction may be executed by one programmable unit. For example, in one embodiment, each instruction may have 32 bits and certain bits of each instruction (e.g., the most significant 4 bits, the least significant four bits, or other bits) may identify the unit (e.g., the sequencer 106 or one component of the PE array 114) that executes the instruction.
Kernel programs may be made up of a series of groups of instructions, which may include a sequence of scalar and vector instructions. Scalar instructions may generate parameters used in vector instructions and manage loops and branches. Vector instructions may configure data paths in columns, control data flow and perform data crunching operations. One group of instructions may configure one column. An instruction for a PE including a no-operation (NOP) may be a delimiter of the group. In some embodiments, instructions in one group may be organized such that scalar instructions that generate parameters for vector instructions are placed before vector instructions. Columns do not directly decode vector instructions. Instead, the sequencer 106 may decode scalar and vector instructions, execute decoded scalar instructions and package decoded vector instructions in configurations and send configurations of memory ports (MPs 120.1-120.N), switch boxes (SBs 122.1-122.N), and processing elements (PEs 118.1-118.N) to columns.
In various embodiments, the processor 100 may be configured to perform Single Instruction Multiple Threads (SIMT) execution. A group of threads may form a block (e.g., a thread block) and a group of blocks may be organized into a grid. The organization of grid and block may be defined for a kernel program before the execution of the kernel program. Each block and thread may have a unique block and thread identifier (e.g., block ID and thread ID), respectively. The block ID for a block in a three-dimensional grid may be calculated as blockId=blockIdx.x+(blockIdx.y*gridDim.x)+(blockIdx.z*(gridDim.x*gridDim.y)). Variables blockIdx.x, blockIdx.y, and blockIdx.z may be the block ID in the x-axis, y-axis, and z-axis of the block, respectively. Variables gridDim.x and gridDim.y may be grid dimensions in the x-axis and y-axis, respectively. The "*" operator is the multiplication operator. The thread ID for a thread in a three-dimensional block may be calculated as threadId=blockId*(blockDim.x*blockDim.y*blockDim.z)+threadIdx.x+(threadIdx.y*blockDim.x)+(threadIdx.z*(blockDim.x*blockDim.y)). Variables threadIdx.x, threadIdx.y, and threadIdx.z may be the thread ID in the x-axis, y-axis, and z-axis of the thread, respectively. Variables blockDim.x, blockDim.y, and blockDim.z may be block dimensions in the x-axis, y-axis, and z-axis, respectively.
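A minimal Python sketch of the block ID and thread ID formulas above follows; the dictionary arguments stand in for the blockIdx, threadIdx, gridDim, and blockDim variables and are illustrative only.

def block_id(blockIdx, gridDim):
    # blockId = blockIdx.x + (blockIdx.y * gridDim.x) + (blockIdx.z * (gridDim.x * gridDim.y))
    return (blockIdx["x"]
            + blockIdx["y"] * gridDim["x"]
            + blockIdx["z"] * (gridDim["x"] * gridDim["y"]))

def thread_id(blockIdx, threadIdx, gridDim, blockDim):
    # threadId = blockId * (blockDim.x * blockDim.y * blockDim.z)
    #            + threadIdx.x + (threadIdx.y * blockDim.x)
    #            + (threadIdx.z * (blockDim.x * blockDim.y))
    bid = block_id(blockIdx, gridDim)
    return (bid * (blockDim["x"] * blockDim["y"] * blockDim["z"])
            + threadIdx["x"]
            + threadIdx["y"] * blockDim["x"]
            + threadIdx["z"] * (blockDim["x"] * blockDim["y"]))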
As used herein, the capital letters X, Y, and Z may refer to dimensions of a thread block in the x-axis, y-axis, and z-axis of the block, respectively. In one embodiment, the values for X, Y, and XYZ (e.g., the multiplication product of X times Y times Z) may be set by the system in the local memory of the sequencer 106 before launching a kernel. And the sequencer 106 may load X, Y and XYZ from the local memory, store them in scalar registers of the sequencer 106 and deliver them as immediate values in configurations to columns.
The sequencer 106 may decode instructions sequentially, and dispatch configurations to target columns. In some embodiments, the sequencer 106 may decode one or more vector instructions to be executed by a MP, a SB and a PE of one column together, generate a set of configurations for the MP, the SB and the PE of the same column and dispatch the set of configurations to a target column in a batch. The configurations for vector processing units may include immediate values generated by executing scalar instructions at the sequencer 106.
A Multiply-Accumulate (MAC) operation accumulates products of pairs of input numbers. Suppose c and d are input arrays of n elements and a is the MAC result; then the MAC operation of c and d may be expressed as a = Σ_{i=0}^{n−1} c_i × d_i.
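For reference, the MAC operation defined above may be sketched in Python as a simple loop over the two input arrays:

def mac(c, d):
    # a = sum of c[i] * d[i] for i = 0 .. n-1
    assert len(c) == len(d)
    a = 0
    for ci, di in zip(c, d):
        a += ci * di
    return a

# Example: mac([1, 2, 3], [4, 5, 6]) returns 1*4 + 2*5 + 3*6 = 32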
A large number of Multiply-Accumulate (MAC) operations are needed in various computation tasks, especially for various machine learning based data processing. Embodiments of the processor 100 may provide hardware supported MAC operations for integer, floating point, or both types of numbers. In one embodiment, for integer MAC operations, the elements of two input arrays c and d may be 8-bit integer numbers, and the accumulation number a may be a 32-bit integer number. For floating point MAC operations, the elements of two input arrays c and d may be 16-bit floating point numbers (e.g., 16-bit Brain floating-point format (bfloat16 or BF16)) and the accumulation number a may be a 32-bit floating point number. Since a warp of K threads may be processed in parallel, the hardware may perform K concurrent MAC operations which may be expressed, by using matrices A,
A is a K×1 column matrix containing K MAC results of respective threads. C is a K×n matrix containing K constant arrays of n elements each, such as filter coefficients for respective threads. D is an n×1 column matrix containing data common to all threads. In one embodiment, the floating-point format for the accumulation number a may be single-precision floating-point format (FP32 or float32).
The MAC operations of input matrices C and D may be broken into accumulation of partial MAC operations of K×m matrix Cj and m×1 matrix Dj such that
n is assumed to be a multiple of m. If n is not a multiple of m, zero elements may be added to make n a multiple of m. In some embodiments, the partial MAC operation for one thread may be referred to as a Multiply-Add (MAD) operation. The K concurrent MAD operations to calculate Bj may be distributed among S consecutive columns indexed from 0 to S−1 forming a processing chain of S columns. The MAC operation may be carried out in 2 steps in each column, a MAD step followed by an accumulation step. The MAD step may be performed by MAD units implemented in a memory port (e.g., MP 120.1-120.N). The accumulation may be performed by ALUs in a processing element (e.g., PE 118.1-118.N). Each MP may be configured to transfer MAD results Bj to a PE in the same way as data loaded from the memory unit 112. In one embodiment, a MAD unit may calculate 16 products and their summation for integer (e.g., m=16) or 8 products and their summation for floating point (e.g., m=8). A PE in a column j may perform the accumulation by calculating A+Bj, with both A and Bj provided to the PE as 2 operands of an ADD instruction, with A provided to the PE from a preceding PE and Bj provided to the PE from a MP of the same column as the PE.
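The following Python sketch models the behavior described above, not the hardware: the MAD step performed in the MP of each column and the accumulation step performed by the PE of the same column, repeated over S = n/m consecutive columns. The matrix layout (lists of lists) and the helper names are assumptions.

def mad_step(C_j, D_j):
    # MAD step in the MP of column j: B_j[k] = sum over i of C_j[k][i] * D_j[i],
    # producing K results, one per thread.
    return [sum(c * d for c, d in zip(row, D_j)) for row in C_j]

def mac_chain(C, D, m):
    # Accumulate partial MAD results over S = n / m columns.
    n = len(D)
    assert n % m == 0, "zero elements may be added to make n a multiple of m"
    A = [0] * len(C)                         # K accumulation results
    for j in range(n // m):                  # one iteration per column in the chain
        C_j = [row[j * m:(j + 1) * m] for row in C]
        D_j = D[j * m:(j + 1) * m]
        B_j = mad_step(C_j, D_j)             # MAD step (memory port)
        A = [a + b for a, b in zip(A, B_j)]  # accumulation step (PE, ADD instruction)
    return A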
Data received from the data inputs 210.1 through 210.6 may be denoted as A, B, C, D, E, and F. Data sent to the data outputs 208.1 and 208.2 may be denoted as VA and VB. In an embodiment in which the ALU 202 may be one ALU, the widths of the data inputs 210.1 through 210.6 and the data outputs 208.1 and 208.2 may be M bits. The width of the ALU may be configured to be either M bits (one word) or 2×M bits (two words) by the configuration. If the width is M bits, the inputs of the ALU are A, B, and C. The output of the ALU is VA. If the width is 2×M bits, the inputs of the ALU are (B, A), (D, C) and (F, E). The output of the ALU is (VB, VA). Here (,) denotes concatenation of M-bit data. For example, when M is 8, inputs and outputs of ALU may be 8 bits or 16 bits; when M is 16, inputs and outputs of ALU may be 16 bits or 32 bits; when M is 32, inputs and outputs of ALU may be 32 bits or 64 bits; and so on. Input data A, B, C, D, E, and F, and output data VA and VB may be M bits. In an embodiment in which the ALU 202 may be a vector ALU, the data inputs 210.1 through 210.6 and the data outputs 208.1 and 208.2 may be vectors of K×M bits. And input data A, B, C, D, E, and F, and output data VA and VB may be vectors of K×M bits.
The data buffers 204.1 and 204.2 may be coupled to the data outputs 208.1 and 208.2 to temporarily store output data. The data buffers 204.1 and 204.2, which may be mapped to the vector registers VA and VB, respectively, may be used to decouple the timing of PEs from that of the succeeding SBs or the gasket memory. In one embodiment, the buffers may be implemented as FIFOs (e.g., a D-FIFO for a data buffer, a C-FIFO for a configuration buffer).
The configuration buffer C-FIFO 214 may receive configurations from the configuration input 212, which may be coupled externally to the sequencer 106 via the configuration bus, and store the received configurations before any execution of a data path starts. The configurations for the PE 200 may be referred to as PE configurations. The PE 200 may be statically configured while processing a thread block, e.g., the PE 200 may be programmed with instructions specified in the configuration to perform one stage of a pipeline. No instructions may be changed while data in the thread block are passing through the PE 200. One of the configuration parameters XYZ may be used to obtain the number of executions which may be specified by ceil (XYZ/K). Here function ceil(x) returns the least integer value greater than or equal to x. The counter 206 may be programmed with the number of executions and used to count the data passing through the data output 208.1. When the counter value has reached the number of executions, a new configuration may be applied. Therefore, reconfiguration capability may be provided in each PE. In one embodiment, the specified number of executions for an instruction may be referred to as NUM_EXEC, K threads concurrently executed may be referred to as a warp and this NUM_EXEC may be equal for all components in one data path. For example, for a thread block with the number of threads TH=1024 threads, NUM_EXEC=ceil(1024/32)=32. The counter 206 may be referred to as a warp counter.
Each cache 303 may be individually coupled to all of the plurality of MIs 124.1-124.N for shared memory access via connection ports 306, and each cache 304 may be individually coupled to all of the plurality of MIs 124.1-124.N for private memory access via connection ports 308. Each of the connection ports 306 and 308 may use two subscripts to identify its connection, with the first subscript identifying the memory cache (by the memory cache subscript 1 to J) and the second subscript identifying the MI (by the MI subscript 1 to N). For example, the connection port 306.1.1 may be for shared memory access for memory cache 303.1 and MI 124.1, the connection port 306.2.1 may be for shared memory access for memory cache 303.2 and MI 124.1, and so on until the connection port 306.J.1 may be for shared memory access for memory cache 303.J and MI 124.1; the connection port 306.1.2 may be for shared memory access for memory cache 303.1 and MI 124.2, the connection port 306.2.2 may be for shared memory access for memory cache 303.2 and MI 124.2, and so on until the connection port 306.J.2 may be for shared memory access for memory cache 303.J and MI 124.2; the connection port 306.1.N may be for shared memory access for memory cache 303.1 and MI 124.N, the connection port 306.2.N may be for shared memory access for memory cache 303.2 and MI 124.N, and so on until the connection port 306.J.N may be for shared memory access for memory cache 303.J and MI 124.N.
Similarly, the connection port 308.1.1 may be for private memory access for memory cache 304.1 and MI 124.1, the connection port 308.2.1 may be for private memory access for memory cache 304.2 and MI 124.1, and so on until the connection port 308.J.1 may be for private memory access for memory cache 304.J and MI 124.1; the connection port 308.1.2 may be for private memory access for memory cache 304.1 and MI 124.2, the connection port 308.2.2 may be for private memory access for memory cache 304.2 and MI 124.2, and so on until the connection port 308.J.2 may be for private memory access for memory cache 304.J and MI 124.2; the connection port 308.1.N may be for private memory access for memory cache 304.1 and MI 124.N, the connection port 308.2.N may be for private memory access for memory cache 304.2 and MI 124.N, and so on until the connection port 308.J.N may be for private memory access for memory cache 304.J and MI 124.N.
It should be noted that both the number of caches 303 and the number of caches 304 may match the number of memory banks, denoted by capital letter J. And the number of MIs 124 may match the number of columns, denoted by capital letter N. The number of memory banks does not need to be identical to the vector size. For example, a vector (e.g., vector ALU, vector address, vector data) may have a vector size K, a PE array may have a number of columns N, and a memory unit 300 may have a number of memory banks J. And K, N and J may be all different. In one embodiment, K may be divisible by J, J may be a power of 2, and the bit width of J minus 1 may be L (e.g., L is log2(J)). For example, J and L may be eight (8) and three (3), respectively, K may be 32 and N may also be 32.
For connections to the memory unit 300, a set of address, WData and RData buses may be coupled to one connection port 306.1.1-306.J.N and 308.1.1-308.J.N shown in
Similarly, the address port 426.1, WData port 422.1 and RData port 424.1 of MI 124.1 may be coupled to the connection port 306.1.1 of the memory unit 300; the address port 426.J, WData port 422.J and RData port 424.J of MI 124.1 may be coupled to connection port 306.J.1. Meanwhile, the address port 426.1, WData port 422.1 and RData port 424.1 of MI 124.N may be coupled to the connection port 306.1.N of the memory unit 300; the address port 426.J, WData port 422.J and RData port 424.J of MI 124.N may be coupled to connection port 306.J.N.
In one embodiment, each of the WData ports and RData ports coupled to a memory port may be configured for vector data connections. For example, the WData ports 406 and 430 may be K×M-bit input ports, and the RData ports 408 and 432 may be K×M-bit output ports.
Address ports 404 and 428 may be configured to use vector addresses. In the private memory access mode, K addresses in one vector address may be continuous in ascending order in accordance with the thread ID. Thus, only the address for the thread with the least thread ID may need to be specified by the vector address, and the width of the address port 404 may be G bits, in one embodiment. Also, assuming J is less than or equal to K, the width of data ports of each bank (e.g., 412 and 414) may be (K/J)×M bits. Since the memory unit 300 may be a J-way interleaved memory, the least significant L bits of the address may determine the memory bank where data for the address may reside. Here L may be the bit width of J minus 1. All K data in one vector data may be evenly distributed among all memory banks and accessible without memory contention.
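A minimal sketch of the private memory access interleaving described above, under the stated assumptions (J a power of 2, L = log2(J), and K continuous addresses); the values of J, L, K, and the base address are examples.

J = 8                      # number of memory banks (example)
L = 3                      # bit width of J - 1, i.e., log2(J)
K = 32                     # vector size

def bank_of(addr):
    # The least significant L bits of the address select the memory bank.
    return addr & (J - 1)

base = 0x100                              # address of the thread with the least thread ID
addrs = [base + k for k in range(K)]      # K continuous addresses in ascending order
per_bank = {b: [a for a in addrs if bank_of(a) == b] for b in range(J)}
# Each bank receives exactly K / J addresses, so there is no memory contention.
assert all(len(v) == K // J for v in per_bank.values())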
In the shared memory access mode, K addresses in one vector address may be different from each other. Data accessed by the vector address may be randomly spread in all memory banks which may result in memory contention. The width of the address port 428 may be K×G bits. The width of data ports of each bank (e.g., 422 and 424) may be M bits. The shared memory access interface 402 may resolve the memory contention.
Because more than one address may be directed to the same memory bank, write data selection units (e.g., "Select 2" units 418.1 through 418.J) and read data selection units (e.g., "Select" units 420.1 through 420.J) may be provided to match the data being written to or read from the memory bank with the address sent to the memory bank. Each of the write data selection units 418.1 through 418.J may receive the index of each address (e.g., 0 to K−1) sent to a corresponding address port 426.1 through 426.J from a corresponding address selection unit 416.1 through 416.J, and send one of the write data (e.g., WD_0 through WD_K−1) with the same index to WData port 422.1 through 422.J (e.g., WData port 422.1 for memory bank 0 cache 303.1, WData port 422.J for memory bank J−1 cache 303.J, etc.). For example, if the address selection unit 416.1 sends G-L bits of A_2, A_15, and A_28 to address port 426.1, the write data selection unit 418.1 receives indices 2, 15, and 28, and sends WD_2, WD_15, and WD_28 to WData port 422.1. Each of the read data selection units 420.1 through 420.J may receive the index of each address (e.g., 0 to K−1) sent to a corresponding address port 426.1 through 426.J from a corresponding address selection unit 416.1 through 416.J, and assign the data received from RData port 424.1 through 424.J (e.g., RData port 424.1 for memory bank 0 cache 303.1, RData port 424.J for memory bank J−1 cache 303.J, etc.) to one of the read data (e.g., RD_0 through RD_K−1) with the same index. For example, if the address selection unit 416.1 sends G-L bits of A_2, A_15, and A_28 to address port 426.1, the read data selection unit 420.1 receives indices 2, 15, and 28, and assigns the data received from RData port 424.1 to RD_2, RD_15, and RD_28.
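The address, write data, and read data selection described above may be sketched behaviorally in Python as follows; the function name and the per-bank dictionary model are assumptions, and contention (multiple addresses directed to the same bank) is simply handled sequentially here.

J = 8      # number of memory banks
L = 3      # bit width of J - 1

def route_shared_access(addrs, write_data, banks, do_write):
    # addrs: K addresses A_0..A_K-1; write_data: WD_0..WD_K-1;
    # banks: J dictionaries modeling the memory bank caches.
    K = len(addrs)
    read_data = [None] * K                              # RD_0 .. RD_K-1
    for bank in range(J):
        # Address selection: indices of the addresses directed to this bank.
        indices = [i for i, a in enumerate(addrs) if (a & (J - 1)) == bank]
        for i in indices:                               # resolved one per cycle on contention
            word_addr = addrs[i] >> L                   # the remaining G-L address bits
            if do_write:
                # Write data selection: send WD_i with the same index as the address.
                banks[bank][word_addr] = write_data[i]
            else:
                # Read data selection: assign the bank's data to RD_i with the same index.
                read_data[i] = banks[bank].get(word_addr)
    return read_data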
Externally, the data inputs 502.1 and 502.2 may be coupled to data outputs (e.g., read data ports) of a MP and mapped to the virtual vector registers MA and MB, respectively. One of them may be coupled to a private memory access data output and the other may be coupled to a shared memory access data output. The data output 504.1 may be coupled to a data input of a MP. The data outputs 504.2 and 504.3 may be coupled to data inputs (e.g., write data ports) of a MP. One of them may be coupled to a private memory access write data port and the other may be coupled to a shared memory access write data port. The data inputs 514.1 and 514.2 may be coupled to data outputs 208.1 and 208.2 (e.g., labeled VA and VB) of a PE (or corresponding outputs of the gasket memory in case of SB 122.1), respectively. The data inputs 524.1 through 524.8 may be coupled to data outputs 526.1 through 526.8 of a SB of a preceding column (or corresponding outputs of the gasket memory in case of SB 122.1), respectively. The data outputs 506.1, 506.2, 508.1, 508.2, 510.1, 510.2 may be coupled to data inputs 210.1 through 210.6 of a PE, respectively. Data from the data outputs 506.1, 506.2, 508.1, 508.2, 510.1 and 510.2 may be denoted as A, B, C, D, E, and F and data from the data inputs 514.1 and 514.2 may be denoted as VA and VB. A, B, C, D, E, and F may be input data to a PE 118 and VA and VB may be output data from a PE 118 as described herein.
The SB 500 may further comprise a configuration buffer 518 and a corresponding configuration input 516. The configuration buffer 518 may be implemented as a First-In-First-Out buffer and referred to as C-FIFO 518. The configuration input 516 may be coupled externally to the configuration bus that is coupled to the sequencer 106 for the SB 500 to receive configurations from the sequencer 106. The configurations for the SB 500 may be referred to as SB configurations. Moreover, the SB 500 may further comprise a plurality of counters 520.1-520.27. With the exception of the data inputs 502.1 and 502.2, each of the other data inputs and all data outputs may have a corresponding counter 520. In addition, the SB 500 may also comprise a plurality of data buffers 522.1-522.14, which may be implemented as data First-In-First-Out buffers and referred to as D-FIFO 522.1-522.14. Each of the D-FIFOs 522.1-522.14 may provide a respective output buffer for each of the data outputs 506.1, 506.2, 508.1, 508.2, 510.1, 510.2 and 526.1-526.8. D-FIFO 522.7-522.14 may be mapped to the vector registers IA, IB, IC, ID, IE, IF, IG, and IH, respectively.
Inside the SB 500, the data input 502.1 may be coupled to the data outputs 506.1, 506.2, 508.1, 510.1 and 526.1 through 526.8. The data input 502.2 may be coupled to the data outputs 506.1, 506.2, 508.1, 508.2, 510.2 and 526.1 through 526.8. The data input 514.1 may be coupled to the data outputs 504.1-504.3, 506.1, 506.2, 508.1, 510.1, and 526.1 through 526.8. The data input 514.2 may be coupled to the data outputs 504.2, 504.3, 506.1, 506.2, 508.1, 508.2, 510.2 and 526.1 through 526.8. Each of the data inputs 524.1, 524.3, 524.5, and 524.7 may be coupled to the data outputs 504.1-504.3, 506.1, 506.2, 508.1, 510.1, and a corresponding one of outputs 526.1, 526.3, 526.5, and 526.7. Each of the data inputs 524.2, 524.4, 524.6, and 524.8 may be coupled to the data outputs 504.1-504.3, 506.1, 506.2, 508.1, 508.2, 510.2 and a corresponding one of outputs 526.2, 526.4, 526.6, and 526.8. For example, data input 524.1 may be coupled to 504.1-504.3, 506.1, 506.2, 508.1, 510.1, and 526.1, data input 524.2 may be coupled to 504.1-504.3, 506.1, 506.2, 508.1, 508.2, 510.2 and 526.2, etc. It should be noted that inside the SB 500, the coupling between an input and an output may be switched on (e.g., connected) or off (e.g., disconnected) based on a current configuration being applied at the SB 500. Moreover, D, E, and F ports of a PE may be for 2×M-bit configuration. Only higher M bits of register pairs (e.g., VB, IB, ID, IF, IH, and MB) may be assigned to D and F, and only lower M bits of register pairs (e.g., VA, IA, IC, IE, IG, and MA) may be assigned to E.
Each of the counters 520.1-520.27 at the data ports may be independently responsible for counting data passing through the data port. When one or more configurations may be loaded into the C-FIFO 518, each configuration may specify the number of executions (e.g., NUM_EXEC). These counters may be referred to as warp counters. During execution of one configuration, all counters 520.1-520.27 may independently count the numbers of data passing through the data ports. When all the warp counters reach the number of executions specified in the configuration, the next configuration in the C-FIFO 518 may be applied.
A similar approach of using the warp counters may be applied inside a PE 118, and a memory port 120. Because these counters may facilitate configuration and reconfiguration of each component that may have such counters, these counters may be referred to as reconfiguration counters and a component that has such counters may be referred to as a reconfigurable unit. An embodiment of a processor 100 may provide massive parallel data processing using the various reconfigurable units and may be referred to as a reconfigurable parallel processor (RPP).
The input 614 may be coupled to an output of MP 120.N and the output 608 may be coupled to an input of MP 120.1. Inside the gasket memory 600, the A-FIFO 601 may be coupled between the input 614 and output 608. The inputs 616.1 and 616.2 may be coupled to outputs 208.1 and 208.2 of PE 118.N, respectively. The outputs 610.1 and 610.2 may be coupled to inputs 514.1 and 514.2 of SB 122.1. Inside the gasket memory 600, the D-FIFO 602.1 may be coupled between the input 616.1 and output 610.1, and the D-FIFO 602.2 may be coupled between the input 616.2 and output 610.2. The inputs 618.1-618.8 may be coupled to outputs 526.1-526.8 of SB 122.N, respectively. The outputs 612.1-612.8 may be coupled to inputs 524.1-524.8 of SB 122.1, respectively. Inside the gasket memory 600, the D-FIFOs 602.3-602.10 may be coupled between the inputs 618.1-618.8 and outputs 612.1-612.8, respectively.
The memory port 700 may further comprise a configuration input 734 and a configuration buffer (e.g., C-FIFO) 736. MP configurations may include instructions to be performed at a MP, for example, LOAD and STORE instructions to load data from the memory unit 112 and to store data to the memory unit 112. The memory port 700 may further comprise an address input port 716, an ALU 718, an address buffer (e.g., A-FIFO) 720, and an address output port 722. The address input port 716 may be coupled to the address output port 722 of a MP of a preceding column (or the address output port 608 of the gasket memory 600 in case of MP 120.1), and the address output port 722 may be coupled to the address input port 716 of a MP of a succeeding column (or the address input port 614 of the gasket memory 600 in case of MP 120.N). The ALU 718 may be a vector of K ALUs whose width may be G bits, and may perform operations on the vector of addresses received from the address port 716 and the vector of data received from the Data port 724 if the data are M bits, or from the Data port 724 and WData port 726 if the data are G bits, and output the vector of result addresses to the address port 702. Also, the ALU 718 may output the result addresses to the address port 722 or pass the addresses received from the address port 716 to the address port 722. The A-FIFO 720 may temporarily store the addresses from the ALU 718 before the addresses are output from the address port 722. The A-FIFO 720 may be mapped to the vector register DVA.
The memory port 700 may also comprise an address calculation unit 714. The address calculation unit 714 may be configured to generate memory addresses for private memory access using a base address and thread variables. The base address may be a starting memory address for data of a block (or a grid) of threads. The thread variables may include the block parameters, such as, but not limited to, the block (or grid) dimensions. The base address and thread variables may be delivered to the MP 700 in a MP configuration.
The memory port 700 may further comprise a plurality of counters 740.1-740.8 for counting the number of executions of one configuration (e.g., NUM_EXEC). Each of the counters 740.1-740.8 may be associated with a data port or address port. Each of the counters 740.1-740.8 may be independently responsible for counting data passing through the associated port. For example, the counter 740.1 may be associated with the address port 702, the counter 740.2 may be associated with the WData port 704, the counter 740.3 may be associated with the address port 708, the counter 740.4 may be associated with the WData port 710, the counter 740.5 may be associated with the address input port 716, the counter 740.6 may be associated with the address output port 722, the counter 740.7 may be associated with the RData port 728, and the counter 740.8 may be associated with the RData port 732. The counters 740.1-740.8 may be warp counters. During execution of one configuration, all counters 740.1-740.8 may independently count the numbers of addresses or data passing through the address or data ports. When all the warp counters reach the number of executions specified in the configuration, the next configuration in the C-FIFO 736 may be applied.
The memory port 700 may further comprise a vector Multiply-Add (MAD) unit 744 and a c buffer 742 for performing multiplication and summation of MAD operations. The c buffer 742 may be a temporary storage for the input matrix Cj of the partial MAC operation and may comprise K entries of 128-bit storage units. The subscript j indicates the index of the column in the processing chain of MAC operations. In the present disclosure, the subscript j of matrices Cj, Dj, and Bj is omitted for simplicity where the discussion is limited within a single column. Each 128-bit storage unit may store a row of the matrix C which may be 16 8-bit integer numbers for integer MAD operations or 8 16-bit floating point numbers (e.g., BF16 format) for floating point MAD operations. The matrix C may be loaded from the RData port 712 to the c buffer 742. In some embodiments, the c buffer 742 may be a double buffer. That is, the c buffer 742 may include two buffers with each buffer comprising K entries of 128-bit storage units.
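A small Python sketch of how one 128-bit storage unit of the c buffer may hold a row of the matrix C, either as 16 8-bit integers or as 8 BF16 values; the byte order and the BF16 truncation shown here are assumptions.

import struct

def pack_int8_row(row):
    # 16 x 8-bit integers = 128 bits.
    assert len(row) == 16
    return struct.pack("<16b", *row)

def pack_bf16_row(row):
    # 8 x 16-bit BF16 values = 128 bits. BF16 keeps the upper 16 bits of the
    # IEEE-754 single-precision encoding (shown here by truncation).
    assert len(row) == 8
    out = b""
    for x in row:
        bits32 = struct.unpack("<I", struct.pack("<f", x))[0]
        out += struct.pack("<H", bits32 >> 16)
    return out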
The vector MAD unit 744 may comprise K MAD units, with K matching the number of ALUs in the vector ALU of a PE (e.g., K=32). Therefore, the vector MAD unit 744 may perform MAD operations for K concurrent threads in a warp. The outputs from the vector MAD unit 744 may be a vector of 32-bit integer or 32-bit floating point numbers which may provide the matrix B. In some embodiments, the lower and higher 16 bits of the outputs may be coupled to the multiplexers 746 and 748 as inputs, respectively. In order to load data from the memory unit 112 to the switch box, the multiplexers 746 and 748 may have other inputs coupled from RData ports 706 and 712, respectively, and select inputs from either the vector MAD unit or the memory unit 112. The outputs of the multiplexers 746 and 748 may be coupled to RData ports 728 and 732, respectively, so that the outputs of the vector MAD unit 744 may be selected by the multiplexers 746 and 748 and transferred to the PE for accumulation.
In the embodiment shown in
The size of the matrix C may be 32×128 bits. The matrix C may be loaded from the memory unit 112 and stored in 32 storage units of a c buffer (e.g., 742.1 or 742.2). It may take 8 cycles to load the matrix C from the memory unit 112 using a 512-bit data bus. To maximize the throughput, the buffer 742 may be double buffered. The matrix D 750 may be a 128-bit vector that may include 16 8-bit integer numbers for integer MAD operations (e.g., d0 through d15, with each of d0 through d15 being an 8-bit integer number) and a common input to all MAD units of the vector MAD unit 744. In one embodiment, 128 bits of a 512-bit data bus may be used to load the matrix D 750. Because the matrix D 750 may be loaded and directly supplied to each MAD unit of the vector MAD unit 744, there is no need to buffer the matrix D 750. The output 758 from the vector MAD unit 744 may be a matrix B which is a vector of 32 elements of 32-bit data denoted as b0 through b31 in
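The integer configuration described above may be modeled, behaviorally only, as 32 MAD units each combining its c-buffer row with the common matrix D; the handling of a shorter D by zero padding, noted in the following paragraph, is also shown. The function name and list layout are assumptions.

K, m = 32, 16      # 32 MAD units, 16 8-bit elements per row for integer operations

def vector_mad_int(c_buffer, d):
    # c_buffer: 32 rows of 16 int8 values (one row per storage unit);
    # d: up to 16 int8 values, the common input to all MAD units.
    d = d + [0] * (m - len(d))                # missing elements of D treated as zero
    assert len(c_buffer) == K and all(len(row) == m for row in c_buffer)
    # Each MAD unit: 16 products and their summation, one 32-bit result per thread.
    return [sum(c * x for c, x in zip(row, d)) for row in c_buffer]

# The 32 results b0 .. b31 form the matrix B forwarded to the PE of the same column.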
It should be noted that the actual number of columns of the matrix C or rows of the matrix D may be programmable and may be provided in the instruction. In one embodiment, if the number of elements specified is less than 16 for integer data type or 8 for floating point data type, the vector MAD unit 744 may assign zeros to missing elements of matrix D.
The MAD unit 752 illustrated in
If the number of bits W of a row of the matrix C and a column of the matrix D is more than 128 bits (e.g., 16 elements of 8-bit data for integer data type or 8 elements of 16-bit data for floating point data type), both matrices C and D may be segmented into S matrices Cj and
where the bit width of a row of the matrix Cj and a column of the matrix Dj is 128. The matrices Cj and Dj may be loaded into the respective MP of S consecutive columns as inputs of partial MAD operations. The partial MAD operation result of the MP, that is, the matrix Bj, may be forwarded to the PE of the same column. The PE may contain 32 ALUs which may perform accumulation by adding the matrix Bj from the MP of the same column to the accumulation results from the preceding column, with the PE of the first column of S consecutive columns adding zero to the matrix B0 from the MP of the first column.
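As a small worked example under assumed sizes, a 512-bit row of C (64 8-bit elements) would be segmented into S = 4 consecutive columns of 128 bits each:

W_bits = 512                     # bits in one row of C / one column of D (example)
elem_bits = 8                    # integer data type
seg_bits = 128                   # bits handled by one MAD unit per cycle
n = W_bits // elem_bits          # 64 elements in total
m = seg_bits // elem_bits        # 16 elements per segment (per column)
S = W_bits // seg_bits           # 4 consecutive columns in the processing chain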
In various embodiments, the execution of MAC operations may be repeated until all threads are processed. For example, if there are 64 threads in total, with each MP/PE performing 32 threads in one warp, 64 threads may need two warps to complete all MAC operations.
The sequencer 800 may be coupled to a task buffer (e.g., task FIFO) 816 and an instruction cache (e.g., i-Cache) 818. Information of a kernel such as base address of the program, job identifier (e.g., job ID), block identifier (e.g., block ID) and block indices may be transferred to the sequencer 800 via the task buffer 816. In one embodiment, the task buffer 816 and the instruction cache 818 may be part of the configuration memory 104 of
In some embodiments, the kernel information may include a bit to indicate whether the sequencer 800 should work in a continuous mode. If the bit is set, the sequencer 800 may continuously read kernel information from the task buffer 816 and fetch kernel programs. Otherwise, the sequencer 800 may monitor the status of columns (e.g., of PE array 114) and wait until all columns become inactive before reading the next kernel information from the task buffer 816.
The sequencer 800 may comprise a controller 802, an instruction buffer 804, a scalar instruction decoder 806, a vector instruction decoder 808, a scalar processor 810, a local memory 812, and a scalar register file 814. The controller 802 may fetch instructions from the instruction cache 818 and put the fetched instructions in the instruction buffer 804. In one embodiment, the instruction buffer 804 may be a circular buffer to hold a number of instructions (e.g., 64 or another number). During operation, for example, the controller 802 may fetch 64 instructions at the beginning of a kernel to fill the instruction buffer 804.
The fetched instructions may include scalar instructions and vector instructions mixed together. Certain bits of an instruction (e.g., the most significant 4 bits, the least significant four bits, or other bits) may specify a hardware unit designated to execute the instruction. The controller 802 may examine these bits and determine whether an instruction is a scalar instruction or a vector instruction based on the designated hardware unit.
The instruction buffer 804 may have an instruction pointer pointing to an instruction in the instruction buffer 804 as the next instruction to be processed. The next instruction to be processed may also be pointed to by a Program Counter (PC) 820 in the controller 802. The controller 802 may determine whether an instruction is a scalar instruction or a vector instruction, and direct scalar instructions to be sent to the scalar instruction decoder 806 and vector instructions to be sent to the vector instruction decoder 808. In some embodiments, the scalar instruction decoder 806 may decode one scalar instruction in one cycle and the vector instruction decoder 808 may decode a plurality of vector instructions in one cycle. For example, in one embodiment, the vector instruction decoder 808 may decode up to 8 vector instructions in one cycle. However, if a vector instruction refers to one or more registers in the scalar register file 814 and the one or more registers are not ready yet, wait cycles may be inserted. The PC 820 may be incremented by one when the scalar instruction decoder 806 decodes a scalar instruction. When the vector instruction decoder 808 decodes vector instructions, the PC 820 may be incremented by the number of decoded vector instructions.
The sequencer 800 may sequentially process instructions in the order stored in the instruction cache 818. Scalar instructions decoded by the scalar instruction decoder 806 may be executed on the fly by controller 802 and the scalar processor 810. Scalar instructions may generate parameters used to configure columns and manage loops and branches. The vector instruction decoder 808 may decode vector instructions to generate configurations for vector processing units. Vector instructions may configure data paths in columns, control data flow and process data in parallel threads. For example, vector instructions for the memory ports (e.g., MPs 120) may include memory access instructions, such as but not limited to, LOAD and STORE; vector instructions for the switch boxes (e.g., SBs 122) may include data copy instructions, such as but not limited to, MOVE and FORWARD; and vector instructions for the processing elements (e.g., PEs 118) may include arithmetic and logical instructions, such as but not limited to, ADD and SUBTRACT, etc.
In some embodiments, configurations may be broadcast to all columns. One of the columns may be a destination or target column for one set of configurations, and each column may have a separate single line coupled to the sequencer for transmitting a valid bit. The scalar instruction decoder 806 may assert the valid bit to select a target column when the configuration buffer of the target column is not full. That is, when the configuration buffer of the target column (e.g., configuration buffers of MP, PE, SB of the target column) has available space, the valid bit for the target column may be asserted for the configurations to be received by the target column.
The size of the configuration buffer for the vector processing units may be larger than the size of one configuration so that there is no need for the sequencer 800 and columns to be synchronized. That is, each of the vector processing units may hold more than one configuration at any time and each column of vector processing units may execute decoded vector instructions in an asynchronous manner with respect to the sequencer 800. Therefore, the sequencer 800 may complete configuration dispatch before columns complete program execution. In at least one embodiment, the sequencer 800 may monitor whether columns are active or inactive but does not monitor which instructions columns are executing.
The destination column may be sequentially selected one at a time in the order of column number. Because the columns may be chained in a loop (e.g., MPs chained from MP 120.1 through 120.N then back to MP 120.1 via the gasket memory 116, SBs and PEs chained from SB 122.1 to PE 118.1 through SB 122.N to PE 118.N then back to SB 122.1 via the gasket memory 116), execution of a kernel program may select any column as a starting column. In one embodiment, the first column of the PE array 114 (e.g., the MP 120.1, SB 122.1, and PE 118.1) may be selected to start execution of a kernel program and other columns may be sequentially selected one at a time in the order of column number.
Kernel specific parameters may be set in the local memory 812 by external devices before the kernel is initiated. The local memory 812 may be Random Access Memory (RAM) (e.g., DRAM, SRAM, etc.). While executing some scalar instructions, the scalar processor 810 may read these parameters from local memory 812, process the parameters, and store the parameters to the scalar registers of the scalar register file 814. The scalar register file 814 may be shared by the scalar processor 810 and vector instruction decoder 808. The vector instruction decoder 808 may obtain the kernel specific parameters from the scalar register file 814 and deliver them to columns as immediate values in configurations. In addition, parameters (e.g., used to configure columns) generated by the scalar processor 810 executing scalar instructions may also be passed over to the vector instruction decoder 808 using the scalar register file 814. In some embodiments, the scalar register file 814 may comprise a plurality of scalar registers. For example, in one embodiment, the scalar register file 814 may comprise 32 16-bit scalar registers denoted as R0 to R31.
The scalar processor 810 may comprise a scalar ALU and a Load/Store Unit. In one embodiment, the ALU may include an integer unit, a floating-point unit, a move unit, and a compare unit. Each of these units may be implemented in a multi-stage pipeline. The Load/Store Unit may also be implemented as multi-stage pipelines. The Load Unit may read data from the local memory 812 and store them to the scalar registers of the scalar register file 814. The Store Unit may write contents of the scalar registers of the register file 814 to the local memory 812.
The scalar instructions decoded by the scalar instruction decoder 806 may include control flow instructions that may be executed by the controller 802. In some embodiments, the control flow instructions may include, but are not limited to, repeat, jump, poll, and barrier instructions. A jump instruction is to change the execution flow from the next instruction in a current sequence of instructions to a destination instruction pointed to by the jump instruction. A poll instruction is to let the controller 802 stop fetching instructions and wait until the DMA operation is done (e.g., DMA module 102 finishes). The poll instruction may synchronize the sequencer 800 and columns. When a barrier instruction is executed, the controller 802 may stop fetching instructions and wait until all columns become inactive. The barrier instruction may synchronize the sequencer 800 and columns.
In some embodiments, the controller 802 may comprise separate hardware units each configured to execute a different type of instruction assigned to the controller 802. For example, the controller 802 may comprise a hardware unit for executing the jump instructions, another hardware unit for executing the poll instructions and yet another hardware unit for executing the barrier instructions.
For repeat instructions, the controller 802 may be equipped with a PC register 822, a column counter 824, a loop counter 826, and a column number register 828. In one embodiment, these hardware units may be contained in one hardware unit designated for repeat instructions. When a repeat instruction is executed, the controller 802 may capture the current value of the PC 820 in the PC register 822, clear the column counter 824, set the loop counter 826 to the total number of times the instructions are to be repeatedly executed, and set the column number register 828 to the number of columns to be repeated. The latter two values may be provided by the repeat instruction. The controller 802 may increment the column counter 824 when configurations are sent to a column. If the column counter value in the column counter 824 is equal to the column number register value in the column number register 828, the controller 802 may clear the column counter 824 and decrement the loop counter 826. If the loop counter value in the loop counter 826 is not zero, the controller 802 may replace the value in the PC 820 with the PC register value in the PC register 822 to jump back to the start point of the loop. If the loop counter value is zero, the controller 802 may exit the loop.
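The repeat-instruction bookkeeping described above may be modeled, under simplifying assumptions, by the following Python sketch; the class and method names are hypothetical, and the actual dispatch of configurations to columns is abstracted away.

class RepeatController:
    """Sketch of the repeat-instruction counters described above."""

    def __init__(self):
        self.pc = 0              # PC 820
        self.pc_register = 0     # PC register 822
        self.column_counter = 0  # column counter 824
        self.loop_counter = 0    # loop counter 826
        self.column_number = 0   # column number register 828

    def execute_repeat(self, num_columns, num_repetitions):
        # Capture the loop start and initialize the counters from the
        # two operands carried by the repeat instruction.
        self.pc_register = self.pc
        self.column_counter = 0
        self.loop_counter = num_repetitions
        self.column_number = num_columns

    def on_configurations_sent_to_column(self):
        # Called each time configurations are sent to a column.
        self.column_counter += 1
        if self.column_counter == self.column_number:
            self.column_counter = 0
            self.loop_counter -= 1
            if self.loop_counter != 0:
                self.pc = self.pc_register   # jump back to the start of the loop
            # a loop counter of zero means the loop is exited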
In one example, the four columns may perform 64 MAC operations for a 4×16 filter with the integer data type. Multiple filters (e.g., 32 filters, one given by each row of the matrix C and mapped to a respective one of 32 threads) may be applied to the same image (e.g., the same matrix D) to extract different features, as in image processing in artificial neural networks.
A pseudo assembly code describing the MAC operation chain may be explained line by line as follows.
“#” on lines 1, 6 and 13 may indicate a boundary of columns. All instructions between two “#” markers may be executed in one column. “%” may indicate a scalar register which may be mapped to a scalar register of the scalar register file 814. Scalar registers %Ac, %AcOffset and %AdOffset in the pseudo assembly code may be initialized by the sequencer 106 at the beginning of the kernel program. With each MP having K MAD units and each PE having K ALUs, and K being 32, MAB and VAB may be vector registers of 32×32 bits. The MP and PE in a column may process 32 threads at a time and repeat the execution until all threads are processed.
Lines 2 to 5 may perform the MAC operations in the first column 902. Line 2 may be executed by the MP of column 902 and may load the matrix C0 to the c buffer. “INT” may indicate that the type of the matrix C0 is integer. “16” may indicate that the number of columns of the matrix C0 is 16. The scalar register %Ac may provide the base address of the matrix C0. The sequencer 106 may decode the instruction and obtain the base address of the matrix C0 from the scalar register. In one embodiment, elements of the matrix C0 may be stored in consecutive addresses, and %Ac is the address of the first element.
Line 3 may be executed by the MP of column 902, and may load the matrix D0 and perform the MAD operations in the MP of column 902. The results of the MAD operations may be assigned to the vector register MAB. “INT” may indicate that the elements of the matrix D0 are of the integer data type. “16” may indicate that the number of rows of the matrix D0 is 16. Addresses of all elements of the matrix D0 may be provided by the vector register DVA which may receive the addresses from the preceding column.
Line 4 may be executed by the vector ALU in the MP of column 902, and may increment all addresses in the vector register DVA by the scalar register %AdOffset and forward the incremented addresses to the succeeding column.
Line 5 may be executed in the PE of column 902 and may perform the accumulation by adding 0 to the vector register MAB. The result may be assigned to the vector register VAB.
Line 7 may be executed in the sequencer 106 of the processor 100 and may create a loop to repeat decoding the instructions for a column (lines 8 to 12) 3 times. For each repetition, a set of configurations may be generated and dispatched to one of 3 consecutive columns. In line 7, the first and second operands of the REPEAT instruction may indicate the number of columns contained in the loop and the number of repetitions, respectively.
Line 8 may be executed in the sequencer 106 of the processor 100 and may increment the base address of the matrix Ci (i=1, 2, 3) (e.g., i=1 for column 904, i=2 for column 906, and i=3 for column 908) by the scalar register %AcOffset.
Line 9 may be executed in the MP of columns 904, 906 and 908 and may load the matrix Ci to the c buffer of the respective MP (same as line 2).
Line 10 may be executed in the MP of columns 904, 906 and 908, and may load the matrix Di and perform the MAD operations (same as line 3).
Line 11 may be executed by the vector ALU in each MP of columns 904, 906 and 908, and may increment all addresses in the vector register DVA by the scalar register %AdOffset and forward the incremented addresses to the succeeding column.
Line 12 may be executed by the vector ALU in the PE of columns 904, 906 and 908, and may perform the accumulation by adding the vector register VAB from the preceding column and the vector register MAB of the current column. The result may be assigned to the vector register VAB of the current column.
This example pseudo assembly code demonstrates that computation-intensive MAC operations (e.g., 16 MAC operations per column for each of 32 parallel threads) may be programmed with only 3 instructions, LOADVEC, LOADLANE, and ADD.
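As an illustration of what those instructions compute, the following Python sketch models the four-column MAC chain at the data level, assuming 32×16 matrices Ci, 16×1 matrices Di and integer data; the function names and data layout are illustrative, not the actual instruction semantics.

def vector_mad(c, d):
    """One MP: each of the 32 MAD units sums the 16 products of a row of c with d."""
    return [sum(c_kj * d_j for c_kj, d_j in zip(row, d)) for row in c]

def mac_chain(c_matrices, d_vectors):
    """Chain MAC operations across consecutive columns, one (Ci, Di) pair per column."""
    vab = [0] * len(c_matrices[0])                 # first column adds zero (line 5)
    for c, d in zip(c_matrices, d_vectors):        # columns 902, 904, 906, 908
        mab = vector_mad(c, d)                     # MAD results produced in the MP
        vab = [v + m for v, m in zip(vab, mab)]    # accumulation in the PE (lines 5 and 12)
    return vab                                     # 32 accumulated results, one per thread

With four matrix pairs, each of the 32 results is accumulated over 4×16 = 64 multiply-accumulate terms, matching the 64 MAC operations per thread noted above.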
In summary, the processor 100 may provide hardware-supported MAC operations for integer and floating point data types. MAD units may be implemented in the Memory Ports (MPs) to ease the programming effort of describing computation-intensive MAC operations. The advantages of hardware-supported MAC operations may include: 1) up to 16 (or 8) MAD operations are absorbed into a single load instruction; 2) no special MAC instruction is required for the PE, and the results of the MAD operations are transferred to the PE in the same way as data loaded from memory; 3) three instructions may program up to 16 MAC operations per thread for the integer data type (or 8 for the floating point data type); and 4) more than 16 MAC operations for the integer data type (or more than 8 for the floating point data type) may be programmed as a chain of MAC operations described as a loop of the 3 instructions.
At block 1004, a second matrix may be loaded into the memory port. For example, the second matrix may be an input matrix D for MAD operations and a common input shared by the plurality of MAD units of the vector MAD unit.
At block 1006, in each MAD unit of the vector MAD unit, a respective set of multiplication results may be generated by multiplying elements of a respective row of the first matrix and corresponding elements of the second matrix. For example, as shown in
At block 1008, a vector of MAD results may be generated. Each MAD result may be generated by adding all multiplication results in a respective set of multiplication results in a respective MAD unit of the vector MAD unit. For example, as shown in
At block 1010, the vector of MAD results may be output from the memory port as a vector input to a vector ALU of a processing element of the column of vector processing units. For example, as shown in
The present disclosure provides apparatus, systems and methods for a reconfigurable parallel processor (RPP). For example, an embodiment of a RPP may utilize a processing element (PE) array as a physical data path to process massive parallel data. The physical data path may be made identical in each section (e.g., one column of a MP, a SB and a PE), which may allow the dependency graph of a kernel program to be mapped to a virtual data path that may be an infinite repetition of the physical data path. Moreover, the scalar instructions of the kernel program may be executed by the sequencer without generating any configurations for a vector processing unit to execute the scalar instructions. Furthermore, scalar control flow instructions may also be executed by the sequencer instead of wasting any vector processing unit cycles.
An embodiment of a RPP may also utilize a gasket memory to temporarily store outputs of the physical data path (e.g., the processing element (PE) array), which may be configured by one segment of the virtual data path that consists of N vector execution nodes. The gasket memory may function as a data buffer (e.g., FIFO) which feeds data back into the physical data path when the physical data path is reconfigured by the next segment of the virtual data path.
An embodiment of a RPP may also have a memory unit with memory interfaces (MIs) connected to each column (e.g., a MP, a SB and a PE) of the physical data path. All data accessed throughout the virtual data path may be stored in the memory unit. For each segment of the virtual data path, a MP may be reconfigured to access the memory unit differently while the data could stay the same.
Embodiments of a RPP may be optimized to allow massive parallelism for Single Instruction Multiple Threads (SIMT) processing. In one example, with one row of 32 PEs and each PE having 32 Arithmetic Logic Units (ALUs), 1024 ALUs may be included in one RPP core. In some embodiments, a multi-core processor may comprise multiple RPPs.
Embodiments of a RPP may be reconfigured according to a reconfiguration mechanism. The various components of a RPP that include one or more reconfiguration counters may be referred to as reconfigurable units. For example, each of the PEs (e.g., PE 118), the switch boxes (e.g., SB 122) and the memory ports (e.g., MP 120) may comprise one or more reconfiguration counters, such as the counter 206 in a PE, the counters 520 in a SB, and the counters 740 in a MP. Data processing may be pipelined when there is no dependency between threads. Identical instructions may be executed multiple times until all threads (e.g., 32 warps of threads for one reconfigurable unit if the total number of threads is 1024) are processed. In various embodiments, when the counters in a reconfigurable unit reach a programmed number (e.g., NUM_EXEC), the reconfigurable unit may replace its configuration with a new context. This reconfiguration may be done in the same way in each PE, SB and MP. Self-reconfiguration may be achieved with minimal idle time for switching.
The exemplary reconfiguration mechanism may reduce the power spent on configuration because the configuration is only switched once after all threads have been processed. This may also reduce idle time between configurations by switching each reconfigurable unit independently at its earliest time.
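A minimal Python sketch of this self-reconfiguration behavior, assuming a simple queue of configurations and an abstract execute callback (both illustrative placeholders), could look as follows.

class ReconfigurableUnit:
    """Illustrative model of a PE, SB, or MP that switches contexts after NUM_EXEC executions."""

    def __init__(self, configurations):
        self.configurations = list(configurations)  # pending contexts, in program order
        self.counter = 0                             # e.g., counter 206, 520 or 740

    def run_warp(self, execute):
        if not self.configurations:
            return                                   # no context left to execute
        config = self.configurations[0]
        execute(config)                              # process one warp under the current context
        self.counter += 1
        if self.counter == config["NUM_EXEC"]:
            self.configurations.pop(0)               # all warps processed: switch to the next context
            self.counter = 0

For example, with 1024 threads processed 32 at a time, NUM_EXEC for such a unit would be 32, so the unit switches contexts only after the 32nd warp.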
In some embodiments, all warps may load the same data using the same addresses in the shared memory access mode. Due to the pipelined nature of operation, only the data load instruction for the first warp may need to be performed. The data loaded may be shared with other warps to reduce the memory access traffic and power consumption.
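The shared memory access mode may be sketched, assuming a simple software cache as a stand-in for whatever buffering the hardware uses, as follows.

class SharedModeLoader:
    """Illustrative model: only the first warp loads; later warps reuse the loaded data."""

    def __init__(self, memory):
        self.memory = memory   # stand-in for the memory unit
        self.cache = {}        # stand-in for the hardware buffering

    def load(self, addresses):
        key = tuple(addresses)
        if key not in self.cache:                        # first warp: real memory access
            self.cache[key] = [self.memory[a] for a in addresses]
        return self.cache[key]                           # remaining warps: shared data, no extra traffic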
In an exemplary embodiment, a processor may comprise a memory unit and a plurality of columns of vector processing units coupled to the memory unit. Each column of the vector processing units may comprise a memory port (MP) configured for vector data access operations and a processing element (PE) having a vector Arithmetic Logic Unit (ALU) configured for vector data processing. The MP of each column of the vector processing units may comprise: an input buffer to store a first matrix loaded from the memory unit and a vector multiply-add (MAD) unit that contains a plurality of MAD units. Each MAD unit of the plurality of MAD units may comprise: a first input coupled to one of storage units of the input buffer holding a row of the first matrix, a second input coupled to the memory unit to load a second matrix from the memory unit, and a plurality of multipliers and a multiple-input adder. The plurality of multipliers may be configured to generate a set of multiplication results with each multiplication result being generated by multiplying an element of a row of the first matrix and a corresponding element of the second matrix. The vector MAD unit may be configured to generate a vector of MAD results with each multiple-input adder configured to generate a MAD result of the vector of MAD results by adding all multiplication results of the set of multiplication results generated by the plurality of multipliers of a same MAD unit as the respective multiple-input adder. The MP may be configured to output the vector of MAD results as a vector input to the vector ALU of a PE of a same column as the MP.
In an embodiment, the input buffer may be a double buffer to store two first matrices.
In an embodiment, each first matrix may be a 32×16 matrix of 8-bit integer or 32×8 matrix of 16-bit floating point and loading the first matrix from the memory unit may take 8 cycles using a 512-bit data bus.
In an embodiment, the second matrix may be a 16×1 matrix of 8-bit integer or 8×1 matrix of 16-bit floating point, and the second matrix may be loaded from the memory unit using 128 bits of a 512-bit data bus.
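The bus-width and cycle figures in the two embodiments above follow from simple bit counting, as the following Python check (using the element sizes stated above) illustrates.

# A 32x16 matrix of 8-bit integers (or a 32x8 matrix of 16-bit floating point)
# holds 4096 bits, which takes 8 transfers over a 512-bit data bus, while a
# 16x1 (or 8x1) second matrix fits in 128 bits of a single transfer.
BUS_BITS = 512

first_matrix_int_bits = 32 * 16 * 8    # 4096
first_matrix_fp_bits = 32 * 8 * 16     # 4096
second_matrix_int_bits = 16 * 1 * 8    # 128
second_matrix_fp_bits = 8 * 1 * 16     # 128

assert first_matrix_int_bits // BUS_BITS == 8
assert first_matrix_fp_bits // BUS_BITS == 8
assert second_matrix_int_bits == 128 and second_matrix_fp_bits == 128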
In an embodiment, a set of consecutive columns of the plurality of columns of vector processing units may be configured to perform Multiply-Accumulate (MAC) operations by passing multiply-add results from MPs of the consecutive columns to PEs of respective columns and accumulating multiply-add results by the PEs.
In an embodiment, the PE of a first column of the set of consecutive columns may add a zero to the multiply-add results generated by the MP of the first column, and pass the results of the addition to the PE of a succeeding column of the set of consecutive columns to be added to the multiply-add results generated by the MP of the succeeding column.
In an embodiment, the first matrix may be loaded from the memory unit according to a first instruction and the second matrix may be loaded from the memory unit according to a second instruction. The second instruction may also cause Multiply-Add (MAD) operations to be performed to generate the vector of MAD results.
In an embodiment, the plurality of multipliers and a multiple-input adder of each MAD unit may be shared for Multiply-Add (MAD) calculations of integer and floating point data types.
In an embodiment, each of the plurality of multipliers of each MAD unit may be configured to multiply either two 8-bit integer numbers or two 16-bit floating point numbers to generate a 16-bit integer or 32-bit floating point multiplication result, and the multiple-input adder of each MAD unit may be configured to add either 16 16-bit integer numbers or 8 32-bit floating point numbers to generate a 32-bit integer or floating point summation result.
In an embodiment, when the number of columns of the first matrix or rows of the second matrix specified in the instructions is less than 16 for the integer data type or 8 for the floating point data type, the vector MAD unit may assign zeros to missing elements of the second matrix.
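The behavior described in the two embodiments above may be illustrated by the following Python sketch of one MAD unit; Python's arbitrary-precision arithmetic stands in for the 16-bit and 32-bit intermediate widths, and the function name and mode parameter are hypothetical.

def mad_unit(row, second, mode="int"):
    # 16 lanes of 8-bit integer pairs, or 8 lanes of 16-bit floating point pairs
    lanes = 16 if mode == "int" else 8
    # Missing elements of the second matrix are filled with zeros
    padded = list(second) + [0] * (lanes - len(second))
    # One multiplier per lane, then the multiple-input adder sums all products
    products = [a * b for a, b in zip(row, padded)]
    return sum(products)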
In another exemplary embodiment, a method may comprise: loading a first matrix into a buffer of a memory port of a column of vector processing units, loading a second matrix into the memory port. The buffer may be an input buffer to a vector multiply-add (MAD) unit that contains a plurality of MAD units, each row of the first matrix may be a first input to a respective MAD unit of the vector MAD unit, and the second matrix may be a common second input to each MAD unit of the vector MAD unit. The method may further comprise generating, in each MAD unit of the vector MAD unit, a respective set of multiplication results by multiplying elements of a respective row of the first matrix and corresponding elements of the second matrix, generating a vector of MAD results, and outputting the vector of MAD results from the memory port as a vector input to a vector Arithmetic Logic Unit (ALU) of a processing element of the column of vector processing units. Each MAD result may be generated by adding all multiplication results in a respective set of multiplication results in a respective MAD unit of the vector MAD unit.
In an embodiment, the input buffer may be a double buffer to store two first matrices.
In an embodiment, the first matrix may be a 32×16 matrix of 8-bit integer or 32×8 matrix of 16-bit floating point and loading the first matrix may take 8 cycles using a 512-bit data bus.
In an embodiment, the second matrix may be a 16×1 matrix of 8-bit integer or 8×1 matrix of 16-bit floating point, and the second matrix may be loaded from the memory unit using 128 bits of a 512-bit data bus.
In an embodiment, the method may further comprise performing Multiply-Accumulate (MAC) operations using a set of consecutive columns of the plurality of columns of vector processing units by passing multiply-add results from MPs of the consecutive columns to PEs of respective columns and accumulating multiply-add results by the PEs.
In an embodiment, the PE of a first column of the set of consecutive columns may add a zero to the multiply-add results generated by the MP of the first column, and pass the results of the addition to the PE of a succeeding column of the set of consecutive columns to be added to the multiply-add results generated by the MP of the succeeding column.
In an embodiment, the first matrix may be loaded from the memory unit according to a first instruction and the second matrix may be loaded from the memory unit according to a second instruction. The second instruction may also cause Multiply-Add (MAD) operations to be performed to generate the vector of MAD results.
In an embodiment, the plurality of multipliers and a multiple-input adder of each MAD unit may be shared for Multiply-Add (MAD) calculations of integer and floating point data types.
In an embodiment, each of the plurality of multipliers of each MAD unit may be configured to multiply either two 8-bit integer numbers or two 16-bit floating point numbers to generate a 16-bit integer or 32-bit floating point multiplication result, and each multiple-input adder of each MAD unit may be configured to add either 16 16-bit integer numbers or 8 32-bit floating point numbers to generate a 32-bit integer or floating point summation result.
In an embodiment, when the number of columns of the first matrix or rows of the second matrix specified in the instructions is less than 16 for the integer data type or 8 for the floating point data type, the vector MAD unit may assign zeros to missing elements of the second matrix.
The techniques described herein may be implemented in one or more application specific integrated circuits (ASICs) in digital logic gates, or by a processor that executes instructions stored in a tangible processor-readable memory storage medium.
In one embodiment, any of the disclosed methods and operations may be implemented in software comprising computer-executable instructions stored on one or more computer-readable storage media. The one or more computer-readable storage media may include non-transitory computer-readable media (such as removable or non-removable magnetic disks, magnetic tapes or cassettes, solid state drives (SSDs), hybrid hard drives, CD-ROMs, CD-RWs, DVDs, or any other tangible storage medium), volatile memory components (such as DRAM or SRAM), or nonvolatile memory components (such as hard drives). The computer-executable instructions may be executed on a processor (e.g., a microcontroller, a microprocessor, a digital signal processor, etc.). Moreover, an embodiment of the present disclosure may be used as a general-purpose processor, a graphics processor, a microcontroller, a microprocessor, or a digital signal processor.
It should be noted that as used herein, a “coupling” and a “connection” between two components, such as one component being “coupled” or “connected” to another component, may refer to an electronic connection between the two components, which may include, but is not limited to, a connection by electronic wiring, through an electronic element (e.g., a resistor, a transistor), etc.
While various aspects and embodiments have been disclosed herein, other aspects and embodiments will be apparent to those skilled in the art. The various aspects and embodiments disclosed herein are for purposes of illustration and are not intended to be limiting, with the true scope and spirit being indicated by the following claims.