The present disclosure relates to learning and processing tensors, and more specifically to hardware architecture that is efficient at performing operations related to sparse tensors.
The use of artificial neural networks (ANNs), or simply neural networks, spans a vast array of technologies. An ANN's complexity, in terms of the number of parameters, is growing exponentially, at a rate that outpaces improvements in hardware performance. In many cases, an ANN may have a very large number of parameters. Training and inference on these networks are bottlenecked by massive linear tensor operations such as multiplication and convolution. Consequently, a large amount of time and/or resources may be used for both ANN creation (e.g., training) and execution (e.g., inference).
Computing systems that execute ANNs often involve extensive computing operations, including multiplication and accumulation. For example, a convolutional neural network (CNN) is a class of machine learning techniques that primarily uses convolution between input data and kernel data, which can be decomposed into multiplication and accumulation operations. Using a central processing unit (CPU) and its main memory to instantiate and execute machine learning systems or models of various configurations is relatively easy because such systems or models can be instantiated with mere updates to code. However, relying solely on the CPU for various operations of these machine learning systems or models would consume significant CPU bandwidth and increase the overall power consumption.
Embodiments relate to an accelerator for performing operations on tensors. The accelerator may include a plurality of multiply circuits configured to perform multiplications between values in a process tensor and values in an activation tensor to generate a plurality of products. The values in the process tensor are associated with tensor identifiers. The accelerator may also include a routing circuit configured to carry over the tensor identifiers of the values in the process tensor to the plurality of products and divide the plurality of products into subsets based on the tensor identifiers. The accelerator may also include a plurality of adder trees coupled to the routing circuit. Each adder tree is configured to receive a subset of the products that are grouped based on the tensor identifiers and accumulate the subset of the products to generate an output value. The plurality of adder trees is configured to generate a plurality of output values. The accelerator may further include an activation circuit coupled to the plurality of adder trees. The activation circuit is configured to select a subset of the output values as winners of an activation selection and set the remaining output values of the plurality of output values to zero.
In some embodiments, the activation circuit is further configured to boost one or more output values of the plurality of output values before the activation selection.
In some embodiments, the one or more output values that are boosted correspond to one or more nodes that are set to zero in a previous cycle of operation.
In some embodiments, the activation circuit is configured to select K output values as the number of output values in the subset that are selected as the winners. In some embodiments, each of the tensor identifiers is used to identify one of a plurality of sparse process tensors.
In some embodiments, the process tensor is a complementary dense process tensor that is combined from a plurality of sparse process tensors.
In some embodiments, the routing circuit includes an arbiter circuit that controls routing of a product of the plurality of products to one of the adder trees.
In some embodiments, the plurality of output values corresponds to a plurality of channel dimensions of the activation tensor.
In some embodiments, the activation circuit includes a histogram memory that is configured to build a histogram that represents a distribution of the plurality of output values.
In some embodiments, the activation circuit includes a sorting circuit configured to select the winners from serial bursts of the output values.
In some embodiments, the activation circuit includes a sorting circuit configured to select the winners from the plurality of output values in parallel.
In some embodiments, a computer-implemented method for operating on tensors may include combining a plurality of sparse process tensors to a complementary dense process tensor. The plurality of sparse process tensors have non-overlapping locations of active values. The method may also include performing computations between the complementary dense process tensor and an activation tensor to generate a plurality of products. The method may further include separating the plurality of products into groups, each group corresponding to one of the sparse process tensors.
In some embodiments, a distribution of the active values in at least one of the sparse process tensors is partitioned.
In some embodiments, the computations between the complementary dense process tensor and the activation tensor are performed by elementwise multiplications between values in the complementary dense process tensor and values in the activation tensor.
In some embodiments, separating the plurality of products into groups includes a pre-multiplication re-arrangement of the activation tensor.
In some embodiments, separating the plurality of products into groups includes a post-multiplication re-arrangement of the plurality of products.
In some embodiments, the method may further include accumulating the groups of products to generate a plurality of accumulated values, each accumulated value corresponding to one of the sparse process tensors.
In some embodiments, the method may further include selecting a subset of the plurality of accumulated values as winners of an activation selection of a sparse neural network; and setting the remaining accumulated values of the plurality of accumulated values to zero.
In some embodiments, separating the plurality of products into groups includes flattening the plurality of products in a form of a tensor into a one-dimensional array and re-arranging the one-dimensional array to the groups of products corresponding to the sparse process tensors.
In some embodiments, the plurality of sparse process tensors corresponds to a plurality of nodes of the sparse neural network.
In some embodiments, the method may further include combining a second plurality of sparse process tensors to a second complementary dense process tensor, wherein the plurality of sparse process tensors and the second plurality of sparse process tensors both correspond to nodes in a layer of the sparse neural network.
In some embodiments, an accelerator for performing operations on tensors may include a memory configured to store a complementary dense process tensor. The complementary dense process tensor may be generated from combining a plurality of sparse process tensors that have non-overlapping locations of active values. The accelerator may also include a computation core coupled to the memory. The computation core is configured to perform computations between two or more tensors to generate a product tensor. The two or more tensors include the complementary dense process tensor. The computation core may include a permutation circuit configured to re-arrange values in one of the two or more tensors or in the product tensor to group the values corresponding to one of the sparse process tensors together.
In some embodiments, the computation core may also include a multiply circuit configured to perform multiplications between two or more tensors; and an adder tree configured to accumulate the values corresponding to the one of the sparse process tensors.
In some embodiments, the permutation circuit is located upstream of the multiply circuit.
In some embodiments, the permutation circuit is located downstream of the multiply circuit.
In some embodiments, the permutation circuit is configured to re-arrange the values in an activation tensor, the activation tensor being one of the two or more tensors.
In some embodiments, the permutation circuit is configured to re-arrange the values in the product tensor.
In some embodiments, the active values in the plurality of sparse process tensors are partitioned, and the permutation circuit includes multiple permutation networks, each of the permutation networks being configured to re-arrange the values corresponding to a partition.
In some embodiments, the permutation circuit includes a network of switches.
In some embodiments, the values corresponding to the one of the sparse process tensors have the same tensor identifier and the permutation circuit is configured to group the values corresponding to the one of the sparse process tensors based on the tensor identifier.
In some embodiments, the accelerator may further include an activation circuit configured to select k winners of outputs of the computation core as values in an output activation tensor.
The features and advantages described in the specification are not all inclusive and, in particular, many additional features and advantages will be apparent to one of ordinary skill in the art in view of the drawings and specification. Moreover, it should be noted that the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter.
The teachings of the embodiments of the present invention can be readily understood by considering the following detailed description in conjunction with the accompanying drawings.
In the following description of embodiments, numerous specific details are set forth in order to provide a more thorough understanding. However, note that the present invention may be practiced without one or more of these specific details. In other instances, well-known features have not been described in detail to avoid unnecessarily complicating the description.
A preferred embodiment is now described with reference to the figures where like reference numbers indicate identical or functionally similar elements. Also in the figures, the left-most digit of each reference number corresponds to the figure in which the reference number is first used.
Reference in the specification to “one embodiment” or to “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiments is included in at least one embodiment. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment.
Some portions of the detailed description that follows are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the means used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of steps (instructions) leading to the desired result. The steps are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical, magnetic or optical signals capable of being stored, transferred, combined, compared and otherwise manipulated. It is convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like. Furthermore, it is also convenient at times, to refer to certain arrangements of steps requiring physical manipulations of physical quantities as modules or code devices, without loss of generality.
However, all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the following discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Certain aspects of the embodiments include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the embodiments could be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by a variety of operating systems.
Embodiments also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magnetic-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each coupled to a computer system bus. A computer readable medium is a non-transitory medium that does not include propagation signals and transient waves. Furthermore, the computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability. Various embodiments described may also be implemented as field-programmable gate arrays (FPGAs), which include hardware programmable devices that accept programming commands to execute the processing of input data.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may also be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method steps. The required structure for a variety of these systems will appear from the description below. In addition, embodiments are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings as described herein, and any references below to specific languages are provided for disclosure of enablement and best mode of the embodiments.
In addition, the language used in the specification has been principally selected for readability and instructional purposes, and may not have been selected to delineate or circumscribe the inventive subject matter. Accordingly, the disclosure set forth herein is intended to be illustrative, but not limiting, of the scope, which is set forth in the claims.
Embodiments relate to the architecture of an accelerator that is efficient at processing tensors associated with a sparse node. A sparse node may include a sparse tensor that has a low density of active values. Using a generic processor, the computation of a tensor, sparse or dense, may include computing the values in the tensor one by one. However, in a sparse tensor, since many values in the tensor are inactive (e.g., zeros) and computation with such inactive values can be skipped, the accelerator may determine the locations of active values in the tensor and perform computation efficiently so that the number of operations to process the tensor is reduced. In some embodiments, since the tensors may have a high degree of sparsity, the sparse tensors may be combined into a dense tensor so that computations of multiple tensors may be carried out in a single set of operations. The distribution of the active values in the sparse tensors may be arranged such that the active values among the sparse tensors to be combined are non-overlapping. The combined dense tensor may be referred to as a complementary dense tensor. Circuitry that improves the routing and re-arrangement of elements may be used to improve the efficiency of grouping the active values and separating them back into sparse tensors after the combination.
While some of the components in this disclosure may at times be described in a singular form while other components may be described in a plural form, various components described in any system may include one or more copies of the components. For example, a computing device 100 may include more than one processor, such as CPU 102, accelerator 104, and GPU 106, but the disclosure may refer to the processors as “a processor” or “the processor.” Also, a processor may include multiple cores.
CPU 102 may be a general-purpose processor using any appropriate architecture. CPU 102 retrieves and executes computer code that includes instructions that, when executed, may cause CPU 102 or another processor, individually or in combination, to perform certain actions or processes that are described in this disclosure. Instructions can be any directions, commands, or orders that may be stored in different forms, such as equipment-readable instructions, programming instructions including source code, and other communication signals and orders. Instructions may be used in a general sense and are not limited to machine-readable codes. CPU 102 may be used to compile the instructions and also to determine which processors should be used to perform certain tasks based on the commands in the instructions. For example, certain machine learning computations may be processed more efficiently using accelerator 104, while other parallel computations may be handled better by GPU 106.
Accelerator 104 may be a processor that is efficient at performing certain machine learning operations such as tensor multiplications, convolutions, and tensor dot products. In various embodiments, accelerator 104 may have different hardware architectures. For example, in one embodiment, accelerator 104 may take the form of field-programmable gate arrays (FPGAs). In another embodiment, accelerator 104 may take the form of application-specific integrated circuits (ASICs), which may include circuits alone or circuits in combination with firmware.
GPU 106 may be a processor that includes highly parallel structures that are more efficient than CPU 102 at processing large blocks of data in parallel. GPU 106 may be used to process graphical data and accelerate certain graphical operations. In some cases, owing to its parallel nature, GPU 106 may also be used to process a large number of machine learning operations in parallel. GPU 106 is often efficient at performing the same type of workload many times in rapid succession.
System memory 108 includes circuitry for storing instructions for execution by a processor and for storing data processed by the processor. System memory 108 may take the form of any type of memory structure including, for example, dynamic random access memory (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR, DDR2, DDR3, etc.), RAMBUS DRAM (RDRAM), static RAM (SRAM), or a combination thereof. System memory 108 usually takes the form of volatile memory.
Storage unit 110 may be a persistent storage for storing data and software applications in a non-volatile manner. Storage unit 110 may take the form of read-only memory (ROM), hard drive, flash memory, or another type of non-volatile memory device. Storage unit 110 stores the operating system of the computing device 100, various software applications 130 and machine learning models 140. Storage unit 110 may store computer code that includes instructions that, when executed, cause a processor to perform one or more processes described in this disclosure.
Applications 130 may be any suitable software applications that operate at the computing device 100. An application 130 may be in communication with other devices via network interface 118. Applications 130 may be of different types. In one case, an application 130 may be a web application, such as an application that runs on JavaScript. In another case, an application 130 may be a mobile application. For example, the mobile application may run on Swift for iOS and other APPLE operating systems or on Java or another suitable language for ANDROID systems. In yet another case, an application 130 may be a software program that operates on a desktop operating system such as LINUX, MICROSOFT WINDOWS, MAC OS, or CHROME OS. In yet another case, an application 130 may be a built-in application in an IoT device. An application 130 may include a graphical user interface (GUI) that visually renders data and information. An application 130 may include tools for training machine learning models 140 and/or performing inference using the trained machine learning models 140.
Machine learning models 140 may include different types of algorithms for making inferences based on the training of the models. Examples of machine learning models 140 include regression models, random forest models, support vector machines (SVMs) such as kernel SVMs, and artificial neural networks (ANNs) such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), autoencoders, long short-term memory (LSTM) networks, and reinforcement learning (RL) models. Some of the machine learning models may include a sparse network structure whose detail will be further discussed with reference to
By way of example, a machine learning model 140 may receive sensed inputs representing images, videos, audio signals, sensor signals, data related to network traffic, financial transaction data, communication signals (e.g., emails, text messages and instant messages), documents, insurance records, biometric information, parameters for manufacturing process (e.g., semiconductor fabrication parameters), inventory patterns, energy or power usage patterns, data representing genes, results of scientific experiments or parameters associated with the operation of a machine (e.g., vehicle operation) and medical treatment data. The machine learning model 140 may process such inputs and produce an output representing, among others, identification of objects shown in an image, identification of recognized gestures, classification of digital images as pornographic or non-pornographic, identification of email messages as unsolicited bulk email (‘spam’) or legitimate email (‘non-spam’), prediction of a trend in financial market, prediction of failures in a large-scale power system, identification of a speaker in an audio recording, classification of loan applicants as good or bad credit risks, identification of network traffic as malicious or benign, identity of a person appearing in the image, natural language processing results, weather forecast results, patterns of a person's behavior, control signals for machines (e.g., automatic vehicle navigation), gene expression and protein interactions, analytic information on access to resources on a network, parameters for optimizing a manufacturing process, predicted inventory, predicted energy usage in a building or facility, web analytics (e.g., predicting which link or advertisement that users are likely to click), identification of anomalous patterns in insurance records, prediction on results of experiments, indication of illness that a person is likely to experience, selection of contents that may be of interest to a user, indication on prediction of a person's behavior (e.g., ticket purchase, no-show behavior), prediction on election, prediction/detection of adverse events, a string of texts in the image, indication representing topic in text, and a summary of text or prediction on reaction to medical treatments. The underlying representation (e.g., photo, audio, etc.) can be stored in system memory 108 and/or storage unit 110.
Input interface 114 receives data from external sources such as sensor data or action information. Output interface 116 is a component for providing the result of computations in various forms (e.g., image or audio signals). Computing device 100 may include various types of input or output interfaces, such as displays, keyboards, cameras, microphones, speakers, antennas, fingerprint sensors, touch sensors, and other measurement sensors. Some input interfaces 114 may work directly with a machine learning model 140 to perform various functions. For example, a sensor may use a machine learning model 140 to infer interpretations of measurements. Output interface 116 may be in communication with humans, robotic agents or other computing devices.
The network interface 118 enables the computing device 100 to communicate with other computing devices via a network. The networks may include, but are not limited to, Local Area Networks (LANs) (e.g., an Ethernet or corporate network) and Wide Area Networks (WANs). When multiple nodes or components of a single node of a machine learning model 140 are embodied in multiple computing devices, information associated with various processes in the machine learning model 140, such as temporal sequencing, spatial pooling and management of nodes, may be communicated between computing devices via the network interface 118.
Example Neural Network Architecture
Neural network 200 includes an input layer 202, an output layer 204 and one or more hidden layers 206. Input layer 202 is the first layer of neural network 200. Input layer 202 receives input data, such as image data, speech data, text, etc. Output layer 204 is the last layer of neural network 200. Output layer 204 may generate one or more inferences in the form of classifications or probabilities. Neural network 200 may include any number of hidden layers 206. Hidden layers 206 are intermediate layers in neural network 200 that perform various operations. Neural network 200 may include additional or fewer layers than the example shown in
Each node 210 in neural network 200 may be associated with different operations. For example, in a simple form, neural network 200 may be a vanilla neural network whose nodes are each associated with a set of linear weight coefficients and an activation function. In another embodiment, neural network 200 may be an example convolutional neural network (CNN). In this example CNN, nodes 210 in one layer may be associated with convolution operations with kernels as weights that are adjustable in the training process. Nodes 210 in another layer may be associated with spatial pooling operations. In yet another embodiment, neural network 200 may be a recurrent neural network (RNN) whose nodes may be associated with more complicated structures such as loops and gates. In a neural network 200, each node may represent a different structure and have different weight values and a different activation function.
In various embodiments, a wide variety of machine learning techniques may be used in training neural network 200. Neural network 200 may be associated with an objective function (also commonly referred to as a loss function), which generates a metric value that describes the objective goal of the training process. The training may intend to reduce the error rate of the model in generating predictions. In such a case, the objective function may monitor the error rate of neural network 200. For example, in object recognition (e.g., object detection and classification), the objective function of neural network 200 may be the training error rate in classifying objects in a training set. Other forms of objective functions may also be used. In various embodiments, the error rate may be measured as cross-entropy loss, L1 loss (e.g., the sum of absolute differences between the predicted values and the actual value), L2 loss (e.g., the sum of squared distances) or their combinations.
The weights and coefficients in the activation functions of neural network 200 may be adjusted by training and may also be constrained by sparsity and structural requirements. Sparsity will be further discussed with reference to
Each of the functions in neural network 200 may be associated with different weights (e.g., coefficients and kernel coefficients) that are adjustable during training. After an input is provided to neural network 200 and passes through neural network 200 in the forward direction, the results may be compared to the training labels or other values in the training set to determine the neural network's performance. The process of prediction may be repeated for other samples in the training sets to compute the overall value of the objective function in a particular training round. In turn, neural network 200 performs backpropagation by using gradient descent such as stochastic gradient descent (SGD) to adjust the coefficients in various functions to improve the value of the objective function.
Multiple rounds of forward propagation and backpropagation may be performed. Training may be completed when the objective function has become sufficiently stable (e.g., neural network 200 has converged) or after a predetermined number of rounds for a particular set of training samples. The trained neural network 200 can be used for making inferences or another suitable task for which the model is trained.
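As a purely illustrative sketch of the training loop just described (forward propagation, evaluation of an objective function, and backpropagation with stochastic gradient descent), the following generic example assumes PyTorch; the layer sizes, random data, and number of rounds are stand-in choices and do not represent the sparse network or accelerator of this disclosure.

```python
# Minimal generic training-loop sketch, assuming PyTorch; illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10))
objective = nn.CrossEntropyLoss()            # cross-entropy loss as one example objective
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

inputs = torch.randn(32, 64)                 # illustrative training batch
labels = torch.randint(0, 10, (32,))

for round_idx in range(100):                 # predetermined number of rounds
    outputs = model(inputs)                  # forward propagation
    loss = objective(outputs, labels)        # evaluate the objective function
    optimizer.zero_grad()
    loss.backward()                          # backpropagation
    optimizer.step()                         # stochastic gradient descent update
```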
ŷ_l = W_l · y_(l-1) + u_l        (Equation 1)

y_l = f(ŷ_l)        (Equation 2)
where f is any activation function, such as tanh or ReLU, and ŷ_l is the output of the linear operation before the activation function is applied.
The above relationship may be conceptually represented as a block diagram as illustrated in
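For illustration only, Equations 1 and 2 can also be evaluated numerically as in the following sketch, which assumes NumPy and illustrative tensor sizes; W_l, u_l, and y_(l-1) are stand-in random values, and ReLU is used as one example of the activation function f.

```python
# Hypothetical numeric sketch of Equations 1 and 2 for a single node, assuming NumPy.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)              # example activation function f

rng = np.random.default_rng(0)
W_l = rng.standard_normal((8, 16))         # process (weight) tensor W_l
u_l = rng.standard_normal(8)               # bias term u_l
y_prev = rng.standard_normal(16)           # output y_(l-1) of the previous layer

y_hat_l = W_l @ y_prev + u_l               # Equation 1: linear operation
y_l = relu(y_hat_l)                        # Equation 2: apply activation f
```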
Here, a value being active may refer to a value whose mathematical operation needs to be included in order to perform the overall computation. For example, in the context of matrix multiplication, convolution, or dot product, an active value may be a non-zero value because the mathematical operations involving it, such as addition and multiplication, need to be included in order to reach the correct result of the matrix multiplication, convolution, or dot product. A value being inactive may refer to a value whose mathematical operation may be skipped. For example, in the context of matrix multiplication, convolution, or dot product, an inactive value is zero because the mathematical operations involving zero, such as addition and multiplication, may be skipped without affecting the final result. A process tensor is dense if the percentage of active values in the tensor exceeds a threshold. Likewise, an activation is dense if the activation function results in an output activation tensor y_l whose percentage of active values exceeds a threshold. Using ReLU as an example, ReLU sets values that are lower than a level (e.g., 0) to 0 and allows values that are greater than the level to retain their values. Hence, ReLU is expected to generate about half active values if the values in the intermediate tensor ŷ_l are roughly equally distributed around the level. A tensor output that has about half of its values being non-zero is often considered dense.
The degree of sparsity for a tensor to be considered sparse may vary, depending on embodiments. In one embodiment, a tensor is considered sparse if fewer than 50% of its values are active. In other embodiments, the threshold on the fraction of active values may instead be 40%, 30%, 20%, 15%, 10%, 5%, 4%, 3%, 2%, 1%, 0.8%, 0.5%, 0.2%, 0.1%, or 0.01%.
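A minimal sketch of such a sparsity test is shown below, assuming NumPy; the is_sparse helper and the 5% default threshold are illustrative choices, not part of the disclosed hardware.

```python
# Hypothetical helper: decide whether a tensor is "sparse" under a configurable
# active-value threshold (e.g., 0.5 for 50%, 0.05 for 5%). Assumes NumPy.
import numpy as np

def is_sparse(tensor: np.ndarray, max_active_fraction: float = 0.05) -> bool:
    active_fraction = np.count_nonzero(tensor) / tensor.size
    return active_fraction < max_active_fraction

example = np.zeros((10, 10))
example[0, :3] = 1.0                        # 3% of the values are active
print(is_sparse(example, 0.05))             # True under a 5% threshold
```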
Neural network 200 with one or more nodes that have the sparse-dense or sparse-sparse structure may be referred to as a sparse neural network. A sparse neural network may be a hierarchical temporal memory system. In various embodiments, while a sparse neural network may include a large number of sparse nodes, the sparse neural network may also include some dense nodes. Also, a sparse node may be a sparse-sparse node 260 or a sparse-dense node 250. In some embodiments, a node may also have either weight sparsity or activation sparsity.
A sparse neural network often has improved performance in terms of speed in training and inference because the large number of inactive values in the network allows the network to skip many mathematical operations. For example, many common operations in neural networks, such as convolution and tensor multiplication, may be converted to dot products, and oftentimes a processor uses dot products to compute those operations. Zeros in the tensors significantly reduce the number of multiplications and additions needed to perform a dot product. In many cases, sparse neural networks may model the structure of a human brain, which appears to also rely on a large degree of sparsity. Those sparse neural networks often not only have improved speed compared to dense neural networks but also improved inference accuracy, particularly in noisy environments. For example, sparse neural networks reduce the number of parameters necessary to achieve an equivalent result accuracy, leading to savings in computational infrastructure, execution time, latency, power, and therefore costs. They also exhibit increased robustness to noise in real-world situations. In edge and IoT applications, a sparse network may fit on a limited deployment platform where an equivalent dense network would not.
Internal memory 310 may be the dedicated memory for accelerator 300 that is used for storage of data fetched from system memory 108 and data outputted by computation cores 320. The data stored in internal memory 310 may include input data of neural network 200, weights and other coefficients in neural network 200, intermediate data of neural network 200, such as output activation tensor 228 that is outputted by each node 210, loss function coefficients, and other suitable data that are related to the operation of neural network 200. For each node 210, input activation tensor 220 may be saved in internal memory 310. The input activation tensor 220 may be divided into multiple units that are sent to various computation cores 320 to be processed in parallel. The outputs of the computation cores 320 may be recombined as output activation tensor 228, which is the output of a node 210. After the operations of the nodes 210 in a layer of neural network 200 are completed, operations of nodes 210 in the next layer may begin. The output activation tensor 228 is then fetched again to one or more computation cores 320 as the input activation tensor 220 of a succeeding node 210 in the next layer. The process repeats until the operations reach the output layer 204. In some embodiments, the data stored in internal memory 310 may be sparse tensors that include zeros in various locations. In some embodiments, some data in internal memory 310 may also be compressed into dense tensors by removing zeros in the tensors. Compression of sparse tensors will be discussed in further detail.
In some embodiments, an accelerator 300 may not need to include internal memory 310. Instead, data are directly fetched and written to the system memory 108.
A computation core 320 is a circuit that performs computations between two or more tensors. The tensors may be a process tensor and an activation tensor. The computation core 320 may include a number of multiply circuits 330 that perform tensor operations such as the multiplication part of dot products, tensor multiplications, and convolutions. Common machine learning operations such as tensor multiplications and convolutions may be converted to dot products and performed by multiply circuits 330. A computation core 320 may include a number of multiply circuits 330 for performing computations in parallel.
A multiply circuit 330 may take various forms. In one embodiment, a multiply circuit 330 is a multiply-accumulate circuit (MAC) that includes multiply units and accumulators. The multiply units may be used to perform multiplications and additions. A multiply unit is a circuit with a known structure and may be used for binary multiplication or floating-point multiplication. An accumulator is a memory circuit that receives and stores values from the multiply units. The values may be stored individually or added together in the accumulator. In some embodiments, the multiply circuits 330 may only include multiply units and perform elementwise multiplications.
Computation core 320 may include circuitry upstream of multiply circuits 330 for pre-processing of various tensors such as by dividing an input activation tensor into smaller units and by compressing and converting sparse tensors to a form that is efficient for the multiply circuits 330 to process. An activation buffer 352 is a buffer circuit and related data-processing circuit for performing data processing of an input activation tensor 220 for a node 210. For example, normally an input activation tensor 220 may have a size that is significantly larger than the capacity of a multiply circuit 330. The input activation tensor 220 may be divided into multiple data subunits and be processed in parallel by different multiply circuits 330. Activation buffer 352 may include circuitry that divides the input activation tensor 220 or include different addresses for various multiply circuits 330 to fetch different portions of the input activation tensor 220. In some embodiments, activation buffer 352 may fetch the tensor values from internal memory 310. In some cases, only the active values are fetched to activation buffer 352.
Activation buffer 352 may also perform a transpose operation of the input activation tensor 220 by fetching data values in the input activation tensor 220 in an order different from the order in internal memory 310. In some cases, an input activation tensor 220 may be saved in internal memory 310 under certain dimensions such as X by Y by Z while the division of data subunits may be more efficient under the dimension Y by Z by X. The efficiency of storage and operation of data under certain dimensions may depend on the hardware landscape such as the multiplier arrangement in a multiply circuit 330 and memory structure.
A weight buffer 350 and pre-processing circuit 354 are other examples of circuitry upstream of multiply circuits 330 for pre-processing of various tensors. For an operation with respect to a given node 210 in neural network 200, weight buffer 350 fetches the tensor values of process tensor 222 from internal memory 310 or system memory 108. Similar to activation buffer 352, in some cases weight buffer 350 may only fetch the active values in process tensor 222.
Pre-processing circuit 354 may include different types of circuits that are used to pre-process process tensor 222 and input activation tensor 220. Process tensor 222 and input activation tensor 220 may be associated with different degrees of sparsity. For example, in one case, process tensor 222 may be sparse while input activation tensor 220 may be dense. In another case, both process tensor 222 and input activation tensor 220 may be sparse. In yet another case, process tensor 222 may be dense and input activation tensor 220 may be sparse. Pre-processing circuit 354 may pre-process process tensor 222 and input activation tensor 220 in different ways, depending on their sparsity. For example, in some embodiments, process tensor 222 and input activation tensor 220 may be processed separately. In some embodiments, when both process tensor 222 and input activation tensor 220 are sparse, pre-processing circuit 354 may process the two tensors together.
In some embodiments, pre-processing carried out by the pre-processing circuit 354 may include identifying locations of active values in the process tensor 222 and input activation tensor 220. Pre-processing circuit 354 may scan through a sparse tensor and identify the locations of the active values in the sparse tensor. The locations may take the form of the locations in the tensor (e.g., a location at the third row and the fifth column in the tensor) and may also take the form of memory addresses of active values (e.g., an active value being saved in the memory address of 0xC0010000). Pre-processing circuit 354 may only transmit the active values to multiply circuits 330 for computations. In some embodiments, pre-processing circuit 354 may identify dense pairs that have active values at the same tensor location in both process tensor 222 and input activation tensor 220. Pre-processing circuit 354 may only transmit the dense pairs to multiply circuits 330 for computations. In other words, in some cases, pre-processing circuit 354 may exclude the transmission of inactive values in process tensor 222 or input activation tensor 220 to multiply circuits 330.
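The following sketch illustrates the dense-pair idea in software, assuming NumPy; the dense_pairs helper is hypothetical and simply locates positions where both tensors hold active values so that only those operand pairs reach the multiply stage.

```python
# Hypothetical sketch of dense-pair identification: keep only operand pairs where
# both the process tensor and the activation tensor are active, so multiplication
# involving inactive values can be skipped. Assumes NumPy.
import numpy as np

def dense_pairs(process_tensor: np.ndarray, activation_tensor: np.ndarray):
    both_active = (process_tensor != 0) & (activation_tensor != 0)
    locations = np.argwhere(both_active)    # tensor locations of mutually active pairs
    return locations, process_tensor[both_active], activation_tensor[both_active]

w = np.array([[0.0, 2.0, 0.0], [1.0, 0.0, 0.0]])
x = np.array([[3.0, 4.0, 0.0], [0.0, 5.0, 6.0]])
locs, w_vals, x_vals = dense_pairs(w, x)
partial_products = w_vals * x_vals          # only these pairs reach the multiply circuits
```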
In some embodiments, pre-processing carried out by the pre-processing circuit 354 may also include compressing a sparse tensor or combining multiple sparse tensors into a dense tensor. In various computations such as dot products and other multiplications, the results will be zero if one of the input values is zero. As such, the processing of those inactive values may be skipped in the multiply circuits 330. In some cases, when two tensors are multiplied, only multiplications between two active values are to be computed. As such, in some embodiments, pre-processing circuit 354 may compress a sparse tensor by converting the sparse tensor into a smaller-size dense tensor. In some embodiments, such as in complementary sparsity that will be discussed in
In some embodiments, pre-processing circuit 354 may also store the addresses of active values in the tensors so that the dense tensors and output tensors generated by multiply circuits 330 may be converted back to sparse tensors. For example, in complementary sparsity, multiple sparse process tensors may be combined into a dense process tensor. Pre-processing circuit 354 may use a state vector and tensor identifiers to keep track of which locations correspond to which sparse process tensors. Pre-processing circuit 354 may function as a permutation circuit that re-routes and re-arranges values in various tensors so that values in a combined tensor may be grouped based on the corresponding sparse tensors. Example structures and operations of permutation circuits are further discussed in
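A software analogue of this bookkeeping is sketched below, assuming NumPy; the combine_complementary helper is hypothetical and records, per location, the tensor identifier of the sparse process tensor that contributed the value, playing the role of the state vector described above.

```python
# Hypothetical sketch: combine complementary sparse process tensors into one dense
# tensor while recording a per-location tensor identifier. Assumes NumPy and that
# the sparse tensors have non-overlapping active locations.
import numpy as np

def combine_complementary(sparse_tensors):
    dense = np.zeros_like(sparse_tensors[0])
    tensor_ids = np.full(dense.shape, -1)          # -1 marks locations with no active value
    for tid, t in enumerate(sparse_tensors):
        active = t != 0
        assert not np.any((tensor_ids != -1) & active), "active locations must not overlap"
        dense[active] = t[active]
        tensor_ids[active] = tid                   # carry the tensor identifier
    return dense, tensor_ids

t0 = np.array([0.0, 1.0, 0.0, 0.0])
t1 = np.array([2.0, 0.0, 0.0, 3.0])
dense, ids = combine_complementary([t0, t1])       # dense = [2, 1, 0, 3], ids = [1, 0, -1, 1]
```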
Pre-processing circuit 354 may also perform other data pre-processing such as transposing process tensor 222 and input activation tensor 220. Pre-processing circuit 354 may also subdivide the tensors in a way that is efficient for multiply circuits 330 to process. The pre-processed tensors are fetched and sent to multiply circuits 330 to perform computations with input activation tensor 220.
After the results of multiply circuits 330 are computed, the results are sent to one or more adder trees 360 to generate an intermediate output tensor ŷ_l. The results (products) of the multiply circuits 330 are combined in adder trees 360. For example, in performing a dot product, multiply circuits 330 perform the multiplication and accumulation parts of the dot product, and the results of different multiply circuits 330 are added together at the adder tree 360 to generate the final result. Alternatively, the accumulation part may be performed in the adder tree, depending on the hardware architecture and the operations. In some embodiments, input activation tensor 220 is divided into multiple subunits for parallel processing in the multiply circuits 330. In some embodiments, for complementary sparsity, the products of the multiply circuits 330 are re-arranged by permutation circuits so that values corresponding to the same sparse process tensor are sent to the same adder tree 360. In some embodiments, the computations performed on the sparse tensors are not multiplication and addition but any pair of operations. In those embodiments, the multiply circuits 330 may be replaced by or reconfigured as a first computation circuit computing the first operator, and the adder tree may be replaced by or reconfigured as a second computation circuit computing the second operator.
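The post-multiplication grouping described above can be sketched in software as follows, assuming NumPy and the identifier array from the previous sketch; the accumulate_by_identifier helper is hypothetical and mimics one adder tree per sparse process tensor.

```python
# Hypothetical sketch: split elementwise products by tensor identifier and
# accumulate each group separately, like one adder tree per sparse process tensor.
import numpy as np

def accumulate_by_identifier(products: np.ndarray, tensor_ids: np.ndarray, num_tensors: int):
    outputs = np.zeros(num_tensors)
    for tid in range(num_tensors):
        outputs[tid] = products[tensor_ids == tid].sum()   # one "adder tree" per identifier
    return outputs

dense = np.array([2.0, 1.0, 0.0, 3.0])                     # combined complementary tensor
tensor_ids = np.array([1, 0, -1, 1])                       # identifier carried per location
activation = np.array([1.0, 4.0, 5.0, 2.0])
products = dense * activation                              # multiply circuits
outputs = accumulate_by_identifier(products, tensor_ids, num_tensors=2)  # [4.0, 8.0]
```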
An activation circuit 370 is a circuit downstream of adder tree 360 that performs the operation specified in the activation function. Activation may be dense or sparse. Examples of dense activation include more conventional activations such as ReLU and tanh. Examples of sparse activation include the k-winner-take-all (k-WTA) activation that will be discussed in further detail. Activation circuit 370 may include a number of comparator circuits that are used for the ReLU activation function. Activation circuit 370 may also include comparator trees for determining the top K highest values in a tensor in the case of a sparse K-winner activation function. Activation circuit 370 generates the output activation tensor 228 from the intermediate output tensor. Activation circuit 370 may set a number of values in the intermediate output tensor to zero, depending on the type of activation function. Hence, output activation tensor 228 may be a dense or sparse tensor. In some embodiments where one or more input tensors were previously compressed, activation circuit 370 may also expand the output activation tensor 228 back to the original size. Output activation tensor 228 is transmitted to internal memory 310 or system memory 108 as the output of a particular node 210. The output activation tensor 228 is fetched subsequently as input activation tensor 220 when another round of operations related to a subsequent node 210 begins. In some embodiments, the output activation tensor 228 may be directly sent within the accelerator 300 as the input activation tensor of the next round, as represented by arrow 372. Example structures of the activation circuit 370 are further discussed in
In some cases, bias factors 364 may also be fetched from internal memory 310. The bias factors 364 may be added or multiplied to some of the output values of some adder trees 360. For example, in some cases, boosting techniques may be used in association with a k-WTA activation. The k-WTA activation selects the highest k output values among the nodes (or among a set or a partition) and sets all other output values to zero. In some cases, output values corresponding to certain nodes, such as nodes that are not selected in previous rounds of prediction or training, are manually boosted by increasing the output values. The boosting is used to increase the chances of some less frequently selected nodes being selected in the k-WTA activation scheme. The magnitude of the boosting for each node may be a hyperparameter that is configurable or may be learned in the training.
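A minimal software sketch of a boosted k-WTA selection is shown below, assuming NumPy; the boost values and the choice to keep the un-boosted magnitudes of the winners are illustrative design choices, not a description of the disclosed activation circuit.

```python
# Hypothetical boosted k-WTA sketch: boosts raise the outputs of less frequently
# selected nodes before selection, the top-k boosted values "win", and all other
# outputs are set to zero. Winners keep their un-boosted values in this sketch.
import numpy as np

def k_wta_with_boost(outputs: np.ndarray, boosts: np.ndarray, k: int) -> np.ndarray:
    boosted = outputs + boosts                     # apply boost factors before selection
    winners = np.argsort(boosted)[-k:]             # indices of the k highest boosted values
    result = np.zeros_like(outputs)
    result[winners] = outputs[winners]             # keep winners, zero everything else
    return result

outputs = np.array([0.2, 0.9, 0.4, 0.1, 0.7])
boosts = np.array([0.0, 0.0, 0.0, 0.8, 0.0])       # node 3 was not selected recently
print(k_wta_with_boost(outputs, boosts, k=2))      # node 3 wins a slot due to its boost
```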
The use of a sparse neural network and an accelerator that is efficient at operating with the sparse neural network reduces the number of computations and the power consumption of the accelerator. The sparse neural network also reduces storage requirements and working memory bandwidth. The accelerator improves the speed of a computing device and is suitable for use in computing devices that have limited power or computing capacity, such as IoT devices and in the case of edge computing.
Tensor 402 is an example unstructured tensor. Tensor 402 and various tensors in
The use of unstructured tensors in an accelerator 300 may significantly slow down the speed of operation due to the sparse marshalling problem in identifying the randomly located active values. As mentioned in
The marshalling problem may be illustrated by an example. The expected number of multiply-accumulate operations for a sparse-sparse (both tensors are sparse) dot product scales with the product of the tensors' densities. In a 1600-element dot product, if the first tensor's density is 5% and the second tensor's density is 12.5%, the expected number of multiply-accumulate operations between two active values is only 10. This represents a 160-fold reduction in computation. To realize this reduction, the sparse tensors may be distilled by pre-processing circuit 354 to eliminate the operand pairs that involve an inactive value and keep only the mutually active operand pairs from each sparse tensor. This distillation process may be referred to as a sparse to dense compression. However, without specifically structured tensors and circuitry, rendezvousing these mutually active pairs can be a challenging problem. Also, in an unstructured tensor, the positions of active values within a tensor usually do not follow an algorithmic pattern. During compression from a sparse tensor to a dense tensor, coordinates need to be associated with the active values, and there is storage and performance overhead in an accelerator for accessing these coordinates. General hardware circuitry, whether a conventional CPU, GPU, FPGA, or ASIC, may take significant time to compare both tensors to determine the locations with active values in both tensors. The time or hardware footprint needed to perform this search may rival a dense operation that conducts the dot product at all 1600 locations by vector processing with single instruction multiple data (SIMD) units. The search for those locations may be referred to as the marshalling problem.
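The arithmetic behind this example can be checked with a few lines of Python; the numbers below simply restate the 1600-element, 5% by 12.5% scenario from the paragraph above.

```python
# Back-of-the-envelope check: expected multiply-accumulate count for a sparse-sparse
# dot product is the element count times the product of the two densities.
elements = 1600
density_a = 0.05          # 5% active
density_b = 0.125         # 12.5% active

expected_macs = elements * density_a * density_b   # 1600 * 0.05 * 0.125 = 10
reduction = elements / expected_macs               # 160x fewer operations than dense
print(expected_macs, reduction)                    # 10.0 160.0
```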
According to an embodiment, the sparsity of tensors in a neural network 200 may be constrained so that the active values are spatially structured. For example, structured tensors may be achieved in the training of neural network 200 by imposing one or more constraints on how the active values are distributed. The tensors 404 through 438 illustrate two types of structure, which are referred to as block structure and partitioned structure. A tensor may also be in a combination of these two types of structures. In a block structure, a tensor may be divided into blocks, which are a group of data value locations in the tensor. In the block structure, the active values are concentrated in a subset of blocks, leaving the rest of the blocks completely inactive. In a partitioned structure, the tensor may be divided into sub-volumes. One or more constraints may be imposed equally on each sub-volume. For example, the number of active values in each sub-volume may be a fixed number so that the partitions have a balanced number of active values. The partitioned structure results in less variability of the sparsity, which in turn reduces the combinatorics of the marshalling problem. The constraints of blocks and partitions may be imposed on one or more dimensions of the tensor. A tensor may also have both the block and partitioned structures in one or more dimensions.
Tensors 404 through 438 illustrate various examples of structures in different dimensions, according to different embodiments. In tensor 404, the tensor is divided into blocks in x-dimension. Each block includes 1×4 value locations. Each block is either active or inactive. In an active block, at least one of the values is active. In an inactive block, all of the values are inactive. In tensor 406, the tensor is divided into partitions in x-dimension. Each row is a partition. A constraint is imposed on tensor 406 so that each row (each partition) has the same number (4) of active values. In tensor 408, both the block structure and the partitioned structure are imposed in x-dimension. Similar to tensor 404, tensor 408 is divided into 1×4 blocks. Each row in tensor 408 has one and only one active block, which is the condition imposed on the partition.
Tensors 412 through 438 illustrate additional structures in different dimensions and different combinations. For example, tensor 412 has a block structure in y-dimension. Tensor 414 has a block structure in both x and y dimensions. Each block includes 2×2 value locations. In tensor 416, the block structure is imposed in y-dimension while the partition structure is imposed in x-dimension. As such, each row (x-dimension) has four dense vertical blocks. Tensor 418 is divided into 2×2 x-y blocks. Partitioning is imposed in x-dimension so that each row in tensor 418 has 2 blocks. Tensors 422, 424, 426, 428, 432, 434, and 436 are additional examples of different combinations of block and partitioned structures. Tensor 438 is divided into 2×2 x-y blocks. Partitioning is imposed in both x-dimension and y-dimension so that each row in tensor 438 has 2 blocks. Each column in tensor 438 also has 2 blocks.
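As one way to make these constraints concrete, the following hypothetical checker, assuming NumPy and a 2-D tensor, tests a tensor-408-style structure in which each row (partition) contains exactly one active 1×4 block; the block width and the one-block-per-row rule are illustrative choices rather than a general definition of the structures above.

```python
# Hypothetical structure checker: a block is "active" if any of its values is active,
# and each row (partition) must contain exactly one active 1 x block_width block.
import numpy as np

def satisfies_block_x_partition_x(tensor: np.ndarray, block_width: int = 4) -> bool:
    rows, cols = tensor.shape
    if cols % block_width != 0:
        return False
    blocks = tensor.reshape(rows, cols // block_width, block_width)
    active_blocks_per_row = (blocks != 0).any(axis=2).sum(axis=1)
    return bool(np.all(active_blocks_per_row == 1))    # exactly one active block per row

t = np.zeros((4, 16))
t[0, 4:8] = 1.0        # each row holds a single active 1x4 block
t[1, 0:4] = 1.0
t[2, 12:16] = 1.0
t[3, 8:12] = 1.0
print(satisfies_block_x_partition_x(t))   # True
```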
The block and partitioned structures can be applied to both input activation tensor 220 and process tensor 222. Each of the input activation tensor 220 and process tensor 222 may be blocked and partitioned in a similar manner but in different dimensions so that the pairing of input activation tensor 220 and process tensor 222 can predictably limit the number of computations.
In operation 460, block-x and partitioned-x tensor 408 may represent the process tensor 222 and block-y and partitioned-y tensor 432 may represent the input activation tensor 220. Tensor 408 and tensor 432 both have a block structure and a partitioned structure, but their blocks and partitions are in different dimensions. In this case, rows of tensor 408 and columns of tensor 432 have a fixed number of blocks. Hence, operation 460 can have a maximum of 1 single instruction multiple data (SIMD) block multiply-accumulate operation per dot product.
In operation 470, block-x and partitioned-xy tensor 428 may represent the process tensor 222 and block-y and partitioned-xy tensor 436 may represent the input activation tensor 220. Tensor 428 and tensor 436 both have a block structure and a partitioned structure, but their blocks are divided in different dimensions. In this case, both the rows and columns of tensor 428 and the rows and columns of tensor 436 have a fixed number of blocks. Hence, operation 470 can have a maximum of 1 single instruction multiple data (SIMD) block multiply-accumulate operation per dot product.
The process 500 may be performed by a computing device, such as computing device 100. The computing device may be equipped with an accelerator 300, 600, or 650 and may perform one or more steps of this process using the accelerator 300, 600, or 650. However, in some embodiments, the process may also be performed using a CPU, a GPU, or any combination of processors. The process may be embodied as a software algorithm that may be stored as computer instructions executable by one or more processors, and certain hardware architectures described in this disclosure may speed up the computation. The instructions, when executed by the processors, cause the processors to perform various steps illustrated in
The computing device initializes 505 a neural network with a plurality of nodes. The structure of the neural network may depend on its type, which can be CNN, RNN, LSTM, etc. The structures and operations of the nodes can be different among the nodes. The nodes may each be associated with a process tensor and an activation tensor. The structure and operation related to the tensors are discussed in
The computing device imposes 510 one or more structural constraints to limit the distribution of active values of the process tensor. The constraints may be based on one or more code instructions in training the neural network that define the configuration of the neural network. In complementary sparsity, the constraints may include constraints on the locations of active values so that no two sparse process tensors within a subset contain an active value at precisely the same location. In some embodiments, the constraints do not dictate the relative positions of the active values or the permissible sparsity levels, except perhaps a minimum sparsity. Given the flexibility of the constraints, experimental results show that neural networks trained with the complementary sparsity constraints do not compromise on accuracy when compared to unstructured sparsity. In some embodiments, additional constraints may be imposed. For example, one or more block or partitioned constraints may also be applied.
One or more structural constraints may also be imposed for an activation tensor by way of the K-winner activation function. Referring temporarily back to
While the K-winner activation function is described as an example of a sparse activation function, other sparse activation functions may also be used in various embodiments. A sparse activation function is an activation function that results in a sparse output. The activation function is applied to the computation result in a neural network node. For example, in the K-winner activation function, the number of active values in the output may be limited by K. Alternatively, or additionally, a threshold approach may be used as a sparse activation function. Values that are below the threshold are set to inactive (e.g., set to zeros). The threshold may be global or local, static or dynamic. The threshold is applied to an entire tensor in the global approach while the threshold is only applied to a certain subset of data (e.g., a block or a partition) in a local approach. In a static approach, a predetermined threshold value may be used. In a dynamic approach, a threshold value may vary based on factors to be determined during the training. For example, statistics may be performed on a set of values on the fly to determine a dynamic threshold cutoff to set some of the values to zeros.
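As a rough illustration of these options, the following sketch shows a global K-winner function and a threshold-based sparse activation with either a static threshold or a dynamic, statistics-based cutoff. The function names and the percentile-based dynamic rule are assumptions for illustration, not the accelerator's actual implementation.

```python
import numpy as np

def k_winner(x, k):
    """Global K-winner: keep the K largest values of the tensor, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(x, axis=None)[-k:]      # indices of the top-K values
    out.flat[idx] = x.flat[idx]
    return out

def threshold_activation(x, threshold=None, percentile=90.0):
    """Static threshold if `threshold` is given; otherwise a dynamic cutoff
    computed on the fly from the statistics of this tensor."""
    cutoff = threshold if threshold is not None else np.percentile(x, percentile)
    return np.where(x >= cutoff, x, 0.0)

x = np.random.randn(16)
print(k_winner(x, k=4))          # only 4 values remain active
print(threshold_activation(x))   # dynamic cutoff keeps roughly the top 10% of values
```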
The structure constraint for the K-winner approach for the activation tensor can be global or local. If K-winner is applied to an entire tensor, the K-winner approach may be referred to as a global K-winner. If K-winner is applied to a subset of the tensor, such as a dimension, a block, or a partition of the data, the K-winner approach may be referred to as local K-winner. The computing device may train 515 the neural network using one or more structural constraints. The computing device may use one or more processors, such as an accelerator 300, a CPU, or in combination, to perform different computations associated with training of the neural network. The training 515 may include forward propagation 520 and backpropagation 530. In forward propagation 520, the processor performs computations as defined by each node in the forward direction as illustrated in
In forward propagation 520, different operations may be performed based on the sparsity of a node. The operations may include combining sparse process tensors into a dense process tensor, multiply-accumulation, and post-processing of tensors. The computing device may combine 522 a plurality of sparse process tensors into a dense process tensor.
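A minimal sketch of the combine step is shown below, assuming the sparse process tensors satisfy the complementary constraint (no two tensors are active at the same position). The function name and the per-position tensor-ID array standing in for the state vector are illustrative assumptions.

```python
import numpy as np

def combine_complementary(sparse_tensors):
    """Overlay complementary sparse process tensors into one dense process tensor.
    Returns the dense tensor and a per-position tensor-ID map (a stand-in for the state vector):
    tid[p] = index of the sparse tensor that owns position p, or -1 if no tensor is active there."""
    dense = np.zeros_like(sparse_tensors[0])
    tid = np.full(dense.shape, -1, dtype=int)
    for t, w in enumerate(sparse_tensors):
        mask = w != 0
        assert not np.any(mask & (tid >= 0)), "tensors are not complementary"
        dense[mask] = w[mask]
        tid[mask] = t
    return dense, tid
```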
In the particular example shown in
The computing device may control 524 the permutation logic of the combined dense process tensor 560. Since the dense process tensor 560 is combined from multiple sparse process tensors, the computing device needs to track the positions of values that correspond to different sparse process tensors. The computing device also routes the appropriate computation products separately for each output. Each dense process tensor 560 is associated with a state vector that controls the permutation logic to produce the grouping of the sparse process tensors. Collectively, the dense process tensor 560 and the state vector can be described as an augmented process tensor. The routing associated with the dense process tensor 560 (complementary tensor) computation may be performed before or after multiplication. For example, step 524 may be performed before or after step 526. In some embodiments, in a pre-multiplication routing, the processor lines up the values in the activation tensor 550 with the weights that are clustered into groups. The pre-multiplication permutation may be referred to as a gather operation. In some embodiments, in a post-multiplication routing of elementwise products, the processor may steer the elementwise products into groups. The post-multiplication permutation may be referred to as a scatter operation. The computing device performs either pre-multiplication routing or post-multiplication routing to segregate the product results and steer the product results toward independent adder trees to be accumulated.
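The following sketch contrasts the two routing options in software terms, reusing the dense tensor and tensor-ID map produced by the hypothetical combine_complementary() above; it models the data movement only, not the hardware datapath.

```python
import numpy as np

def gather_route(activation, dense, tid, num_tensors):
    """Pre-multiplication (gather) routing: group activation/weight pairs by tensor ID,
    then multiply and accumulate each group."""
    order = np.argsort(tid.ravel(), kind="stable")   # group positions by owning tensor
    a = activation.ravel()[order]
    w = dense.ravel()[order]
    t = tid.ravel()[order]
    prods = a * w
    return np.array([prods[t == i].sum() for i in range(num_tensors)])

def scatter_route(activation, dense, tid, num_tensors):
    """Post-multiplication (scatter) routing: multiply first, then steer each
    elementwise product to the accumulator selected by its tensor ID."""
    prods = (activation * dense).ravel()
    out = np.zeros(num_tensors)
    for p, t in zip(prods, tid.ravel()):
        if t >= 0:
            out[t] += p                               # one adder tree per tensor ID
    return out
```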
The computing device may perform 526 elementwise operations between the dense process tensor 560 and the activation tensor 550. The elementwise operations may be multiplication operations to generate multiply products (e.g., Hadamard products) and may be performed by a number of multiply circuits 330 in parallel. Similar to the discussion of
The computing device separates the elementwise products into different results 570, 572, 574, 576, and 578 based on the permutation logic. The processor may perform 528 accumulations of elementwise products that correspond to sparse process tensors 540 through 548. For example, the elementwise products of the multiply circuits 330 are aggregated in adder trees 360. Each accumulated result corresponds to an original sparse process tensor. As such, multiple sparse process tensors 540 through 548 are multiplied with the activation tensor 550 in a single multiplication operation through the dense process tensor 560, and the accumulated results, which correspond to results of different nodes, are generated separately. Similar to the discussion of
The computing device may also apply 529 activation functions to the accumulated results generated by the adder trees 360. The activation function may be a dense activation function such as ReLU or tanh. The activation function may also be a sparse activation function such as K-winner. The activation function may further be a sparse and structured activation function such as blocky K-winner or partitioned K-winner. Blocky K-winner may refer to a division of the tensor by blocks and selection of top K blocks. Partitioned K-winner may refer to a division of the tensor by partitions and selection of top K values in each partition. After completing the computations of a node, the processor may perform computations on a subsequent node in the forward direction of the neural network until an inference result is made. The inference result is compared to the actual label of a training sample.
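A hedged sketch of the structured K-winner variants named above is shown below; the block/partition layout, the use of block sums for ranking blocks, and the assumption that the tensor length divides evenly are all illustrative choices.

```python
import numpy as np

def blocky_k_winner(x, block, k):
    """Divide a 1-D tensor into blocks of size `block` and keep the top-K blocks
    (ranked here by block sum); all other blocks are zeroed."""
    blocks = x.reshape(-1, block)
    keep = np.argsort(blocks.sum(axis=1))[-k:]
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.ravel()

def partitioned_k_winner(x, partition, k):
    """Divide a 1-D tensor into partitions of size `partition` and keep the
    top-K values within each partition."""
    parts = x.reshape(-1, partition)
    out = np.zeros_like(parts)
    idx = np.argsort(parts, axis=1)[:, -k:]
    np.put_along_axis(out, idx, np.take_along_axis(parts, idx, axis=1), axis=1)
    return out.ravel()
```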
In backpropagation 530, the computing device may adjust 552 the weight values in process tensors of various nodes under the structural constraints, such as the complementary sparsity constraints. For example, the weight values may be adjusted using techniques such as coordinate descent to change the values in directions that make it more likely for the neural network to generate the correct inference result.
After the neural network is trained with training samples, the neural network may be used to make 535 inferences from actual samples. The inference may be performed using the steps described in forward propagation 520. Since the sparsity distribution and the active values in a process tensor may be fixed during the training, the combination of multiple sparse process tensors 540, 542, 544, 546, and 548 may be performed offline as a preprocessing step. The inference is also accelerated because the trained sparse process tensors 540, 542, 544, 546, and 548 are combined as the dense process tensor 560.
In the combine step, multiple sparse process tensors 580, 582, and 584, each having 33% sparsity in this example, are combined and overlaid to form a single dense process tensor 586. The active values in each of the sparse process tensors 580, 582, and 584 remain in the same positions in the single dense process tensor 586.
In the multiply step, each value in the combined dense process tensor 586 is multiplied with the corresponding value in the activation tensor 590 in elementwise operations to generate elementwise products (e.g., Hadamard products). The elementwise products may be represented as a tensor form 592.
In the route step, the appropriate elementwise products are routed separately for each output. For example, the elementwise products that correspond to the first sparse process tensor 580 are routed together based on the permutation logic in the state vector. Likewise, the elementwise products that correspond to the second sparse process tensor 582 and the elementwise products that correspond to the third sparse process tensor 584 are respectively routed together based on the permutation logic.
In the sum step, the routed products are aggregated to form a separate result that corresponds to each sparse process tensor. Each separate result is a result of a node in a layer of the neural network. The sum step is an accumulation step that may be performed by the adder trees 360.
In the activation step, one or more activation criteria may be applied to the results of those nodes, which are aggregated in the sum steps. The activation criteria may be ReLU, tanh, LSTM gates, or other common activation criteria in a dense activation neural network. In a sparse activation neural network, the activation criteria may be a form of K-winner. The values of the results of those nodes are compared and top K values are selected as the winners of an activation selection. Other values are set to zero.
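The five steps can be strung together in a small worked example, reusing the hypothetical combine_complementary(), scatter_route(), and k_winner() functions from the earlier sketches (this snippet assumes those definitions are in scope); the values and shapes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
owner = rng.integers(0, 3, size=(5, 5))                                   # which of 3 tensors owns each position
sparse = [rng.standard_normal((5, 5)) * (owner == i) for i in range(3)]   # complementary by construction

dense, tid = combine_complementary(sparse)               # combine step
activation = rng.standard_normal((5, 5))                 # activation tensor
node_sums = scatter_route(activation, dense, tid, 3)     # multiply + route + sum steps
winners = k_winner(node_sums, k=1)                       # activation step (K-winner)
print(node_sums, winners)
```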
A group of permutation state registers 610 are added to the accelerator 600 and the accelerator 650. As discussed in association with the process 500, multiple sparse process tensors are combined into a dense process tensor. In making an inference after training, the combination and generation of the dense process tensor may be performed as a pre-processing step and the dense process tensor may be stored in a memory such as internal memory 310 or system memory 108. The permutation state registers 610 are used to store the state vector that tracks the permutation logic used when combining multiple sparse process tensors into a dense process tensor. For example, the permutation logic may store the corresponding active values of sparse process tensors as a sequence of sparse process tensor identifiers.
The permutation circuit 605 performs the routing of values based on the permutation logic stored in the permutation state registers 610. In pre-multiplication routing, the permutation circuit 605 may be an example of the pre-processing circuit 354. In the accelerator 600, a gather operation may be performed as a pre-multiplication routing operation. In a pre-processing stage, the values in the dense process tensor stored in a memory such as system memory 108 may be saved based on the order of the sparse process tensors (e.g., the order of the nodes in a layer of the neural network). For example, the values in the dense process tensor may have been re-routed in a pre-processing stage so that the active values in a first sparse process tensor will go first, then the active values in a second sparse process tensor, and so forth, even though such an order is not the actual order of the values in the dense process tensor. To perform the elementwise operations, the permutation circuit 605 may re-route and group the values in the activation tensor stored in activation buffer 352 based on the corresponding permutation and ordering of the routed dense process tensor. Elementwise operations may then be performed between the routed dense process tensor and the routed activation tensor in multiply circuit 330. The elementwise products are already gathered and ordered based on a certain order of the nodes in the neural network. As such, accumulations may be performed separately for each node.
In the accelerator 650, a scatter operation may be performed as a post-multiplication routing operation. The activation tensor and the dense process tensor may be directly multiplied in an elementwise manner using the multiply circuits 330 without re-routing. Hence, for example, the dense process tensor 586 shown in
Whether the pre-multiplication routing or post-multiplication routing is used may depend on embodiments. In some embodiments, if the multiplication operands are floating-point numbers, pre-multiplication routing and post-multiplication routing consume equal or similar resources. In some embodiments, if the multiplication operands are fixed-point numbers, pre-multiplication routing, in which the values in the activation tensors are re-arranged, may be preferable. In fixed-point post-multiplication routing, the product values are often twice the width of the activation operand values and therefore require twice the resources (e.g., multiplexors, wires) to re-route the values into groups based on nodes of the neural network.
The two sparse process tensors 650 and 655 are combined to form a complementary tensor 660, which is multiplied with an activation tensor 665 in elementwise operations to generate an elementwise product tensor 670. The elementwise product tensor 670 has a white block that represents the location where both sparse process tensors 650 and 655 have an inactive value. The elementwise product tensor 670 is flattened to a linear array 675. The linear array 675 has the same order of values as the elementwise product tensor 670 and, hence, has the elementwise product values of both sparse process tensors 650 and 655 mixed. In flattening the elementwise product tensor 670, the processor, such as accelerator 650, may remove any common inactive position(s). For example, since the elementwise product tensor 670 contains a white block, the 5×5 tensor is flattened to a 1×24 array with one value removed. Additional values may be removed if more common inactive positions are present. The linear array 675 is then re-arranged to form a permuted linear array 680 by the permutation circuit 605. Values in linear array 675 are routed as groups based on the sparse process tensors 650 and 655. Each group can be sent to an adder tree for accumulation. Example circuitry of the permutation circuit 605 is discussed in
The pre-multiplication routing may be carried out in a fashion similar to that illustrated in
The switch circuit 710 is a simple circuit unit that maps 2 inputs to 2 outputs using a control bit. A first value of the control bit directs the switch circuit 710 to simply pass the 2 inputs to the 2 outputs. The second value of the control bit directs the switch circuit 710 to swap the inputs. The permutation network circuit 720 is a combination of multiple switch circuits 710 in a particular order so that N inputs can be permuted in any order as N outputs. The example permutation network circuit 720 shown in
The permutation network circuit 720 is less resource-intensive for implementing the desired reordering than the parallel operations that may otherwise be used to permute a vector into a particular order. The permutation network circuit 720, such as the Waksman network, takes multiple nodes and logic stages to effect a permutation. In some embodiments, the permutation circuit 605 may be subdivided into multiple smaller permutation networks. The number of subdivisions corresponds to the number of samples for each sparse kernel.
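A behavioral sketch of the switch element and one column (stage) of such switches is shown below. Computing the control bits that realize an arbitrary permutation, as a Waksman network does, is outside this sketch; the control bits are simply supplied by the caller.

```python
def switch(a, b, ctrl):
    """2-input switch element: pass through when ctrl is 0, swap when ctrl is 1."""
    return (a, b) if ctrl == 0 else (b, a)

def switch_stage(values, pairs, ctrl_bits):
    """Apply one column of switch elements; `pairs` lists the (i, j) inputs of each switch."""
    out = list(values)
    for (i, j), c in zip(pairs, ctrl_bits):
        out[i], out[j] = switch(values[i], values[j], c)
    return out

# One stage over 4 values with both switches set to "swap".
print(switch_stage([10, 20, 30, 40], pairs=[(0, 1), (2, 3)], ctrl_bits=[1, 1]))
# -> [20, 10, 40, 30]
```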
The order of the values of activation array 750 is re-arranged by different permutation network circuits 720. In this example, the maximum length of activation array 750 is 25. In the particular example of circuitry 700, five different permutation network circuits 720 are included in the circuitry 700. In various embodiments, other numbers of permutation network circuits 720, such as a single one, may also be used. After the values in the activation array 750 are re-arranged, the values are multiplied with the weight array 760 in an elementwise fashion to generate elementwise products 770. The elementwise products 770 may be statically routed to the adder trees 360. As the example dense process tensor is combined from 5 sparse process tensors, the elementwise products 770 are routed to five adder trees 360.
The partition sparsity constraint may further improve the efficiency of the neural network and the associated hardware. The process tensor is subject to a partition sparsity constraint so that the process tensor is divisible into five (or N for other sizes in other examples) different partitions. As such, smaller permutation network circuits 720, such as the one illustrated in
For each of the K non-zero activation values in the activation tensor, the index of the value may be used to extract the relevant weight values, which are then multiplied in an elementwise fashion. The individual terms of the elementwise products are routed separately to compute the sums for each output channel.
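In software terms, the idea can be sketched as follows; the dense weight matrix layout and the use of the K largest-magnitude activations as the non-zero values are assumptions for illustration.

```python
import numpy as np

def sparse_activation_layer(activation, weights, k):
    """For the K non-zero activation values, use their indices to pull the matching
    weight rows and accumulate the elementwise products per output channel.
    `weights` is a dense (num_inputs x num_output_channels) matrix here for simplicity."""
    idx = np.argsort(np.abs(activation))[-k:]     # indices of the K surviving activation values
    out = np.zeros(weights.shape[1])
    for i in idx:
        out += activation[i] * weights[i, :]      # products routed to each output channel's sum
    return out
```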
To compute the operation efficiently, a preprocessing step that combines sets of sparse process tensors 810 into smaller sets of combined dense tensors may be used in a fashion described in
At inference time, the following formula generates the lookup address for the AWT 830, where (W_x, W_y) are the coordinates of columns in the CSFB 820, I_j is the index associated with the j'th non-zero activation value, and C_in is the number of channels in the input activation tensor to the layer:
Address = I_j + W_x*C_in + W_y*C_in*W    Equation (3)
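Equation (3) translates directly into code; the function name is an assumption.

```python
def awt_address(i_j, w_x, w_y, c_in, w):
    """Equation (3): lookup address into the AWT for the j'th non-zero activation value."""
    return i_j + w_x * c_in + w_y * c_in * w
```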
A scaling issue with this scheme is the amount of memory consumed by the complete AWT structure. The total number of bits for the multi-ported AWT 830 is:
B_M = C_in * W^2 * K * L * B_E    Equation (4)
Here, B_E is the size of each element and is the sum of the bit size of the weight element value, B_W, plus the bit size of the associated TID, B_ID. In some embodiments, 8-bit weights are used, so B_W = 8. To determine B_ID, the number of sparse tensors that can fit into a single CSFB 820 is calculated. The active weight values in each sparse tensor may be distributed using partitioned weight sparsity along the C_in dimension. With N active values in each column of the sparse process tensor, the number of sparse process tensors in a single CSFB 820 is C_in/N. Therefore, B_ID = ⌈log2(C_in/N)⌉. If C_out is the number of output channels produced by the layer, the number of CSFBs 820 in an AWT 830, L, is equal to C_out/(C_in/N). Plugging this into Equation (4) yields:
B_M = W^2 * C_out * N * K * B_E    Equation (5)
In some embodiments, the size of memory decreases as activation sparsity is increased (decreasing K). Similarly, the size of memory decreases as the weight sparsity is increased (decreasing N). Therefore, the memory savings from weight and activation sparsity are multiplicative. Overall, with sparse-sparse networks, this approach of replicating weights enables far higher throughput with favorable memory scaling.
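The scaling behavior can be checked numerically with Equation (5); the function name and the example layer dimensions below are arbitrary, and 8-bit weights are assumed as in the discussion above.

```python
import math

def awt_bits(c_in, c_out, w, k, n, b_w=8):
    """Equation (5): total AWT bits, with B_E = B_W + B_ID and B_ID = ceil(log2(C_in/N))."""
    b_id = math.ceil(math.log2(c_in / n))
    b_e = b_w + b_id
    return w * w * c_out * n * k * b_e

print(awt_bits(c_in=64, c_out=64, w=3, k=8, n=8))   # denser configuration
print(awt_bits(c_in=64, c_out=64, w=3, k=4, n=4))   # sparser: K and N halved, far fewer bits
```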
In some embodiments, an accelerator is further designed for efficient routing of the elementwise products from the elementwise operations. After an activation value is multiplied by a weight value, to complete the computation such as convolution, each resulting elementwise product is combined with the other products corresponding to the same sparse process tensor to generate an accumulated value. The relevant products are identified using the TIDs, which are copied and carried along with the computation, as shown in
In parallel routing, routing of multiple elementwise products to non-conflicting inputs in an adder tree may introduce additional complexity. The parallel routing routes elementwise products based upon the TIDs. Additionally, destination address bits may be needed to designate the specific input port of the adder in which an elementwise product should land. This may be resolved with an arbiter 1010, which provides these additional address bits before the elementwise product is passed to a larger multiplexer network. This is indicated in
Various factors may be used to improve or adjust the efficiency of the computation. For example, in some embodiments, a partition sparsity constraint may be used. Sparsity partitioned in the channel dimension, as reflected in the range of TIDs, may reduce the bit size of the TIDs, since only enough bits are needed to identify the sparse process tensor within the channel dimension, not the location within the W^2*C_in locations of a dense process tensor. Other factors that may affect the computation efficiency include K and N. Small values of K, reflecting high activation sparsity, reduce the number of low-order bits needed for adder tree input port assignment in the parallel implementation. Small values of N, reflecting high weight sparsity, also reduce the number of low-order bits needed, since the number of product terms that can be directed toward a single adder tree is min(K, N).
The arbiter circuit 1010 generates the low-order address bits from the set of K Tensor IDs (TIDs). Each occurrence of a product with the same TID effectively increments a count associated with that TID. This is done with a bit-wise prefix sum module for each TID, where the positions of the input “1” bits correspond to the storage order of the products and their TIDs. Referring to FIG. 10C, each of the K TIDs is fed into the select lines of a 1-to-F single-bit demultiplexer. The inputs to the demultiplexers are tied to logical “1”, while the outputs of each demultiplexer are distributed to F instances of the Prefix Sum circuits shown in
The K input bits to each Prefix Sum circuit are summed to produce a K*B_T-bit wide output.
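A behavioral sketch of what the arbiter computes: for each product (in storage order), the low-order address bits are the running count of earlier products that carry the same TID, so products sharing a TID land on distinct input ports of the same adder tree. The sequential loop stands in for the demultiplexer-plus-prefix-sum hardware.

```python
def arbiter_low_bits(tids):
    """For each product (in storage order), return the running count of earlier
    products with the same TID; this count forms the low-order adder-tree port bits."""
    counts = {}
    ports = []
    for t in tids:
        ports.append(counts.get(t, 0))
        counts[t] = counts.get(t, 0) + 1
    return ports

print(arbiter_low_bits([2, 0, 2, 1, 2, 0]))   # -> [0, 0, 1, 0, 2, 1]
```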
In some embodiments, for K-winner-take-all (k-WTA) techniques, activation sparsity may be induced by explicitly restricting the number of active output elements to the K largest values that are produced by a layer. In some cases, determining these top K values efficiently can represent a significant obstacle to the effective use of activation sparsity. The time and resources expended performing the sorting operation may erode the performance benefits associated with leveraging the resulting sparsity in subsequent processing. k-WTA implementations may fall into two broad categories. In a global k-WTA, all output elements in an output activation tensor are examined to determine the K largest values to be selected, and the rest are set to zeros. In some embodiments, global k-WTA may be used in linear layers of a neural network. In a local k-WTA, the activation is partitioned into smaller units, and only the elements belonging to a partition are compared to each other. In some embodiments, local k-WTA may be used in convolutional layers of a neural network, where the winner-take-all competition happens along a specific dimension, such as the channel dimension. The process illustrated for determining k-WTA may be carried out in the activation circuit 370.
The activation memory 1110 may also receive biases 364 as shown in
For improved performance, an implementation may process multiple output elements in parallel. In this scenario, multiple histograms are built in parallel and then combined to determine the overall cutoff value. An example of this implementation is illustrated in
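One plausible reading of this scheme, sketched below, builds a fixed-range histogram per partition, sums the histograms, and walks down from the top bin until roughly K values lie above the cutoff. The bin count, value range, and exact combining rule are assumptions; the disclosed hardware may differ.

```python
import numpy as np

def histogram_cutoff(partitions, k, bins=64, lo=0.0, hi=1.0):
    """Combine per-partition histograms (built 'in parallel' in hardware) and walk
    down from the top bin until at least K values lie above the returned cutoff."""
    edges = np.linspace(lo, hi, bins + 1)
    total = np.zeros(bins, dtype=int)
    for p in partitions:
        total += np.histogram(p, bins=edges)[0]
    running = 0
    for b in reversed(range(bins)):
        running += total[b]
        if running >= k:
            return edges[b]          # approximate cutoff; values >= this are kept
    return lo
```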
The use of a partition sparsity constraint may provide significant efficiency benefits in a sparse activation operation. In some embodiments, such as for convolutional layers, activation tensors and outputs may have a natural partitioning in the channel dimension. When the top-K operation in k-WTA is implemented as a sorting operation, which may have complexity O(N*log(N)) in either time or hardware resources, partitioning reduces the size of each sort and hence the associated cost. The position of each result value produced by the convolutional layer may be tracked through the sorting process. This is achieved by appending an index to each data value entering the sorting function.
In some embodiments, sorting may be performed in several stages. Since only the top K values in each set of output values need to be found, the ordering of the low-valued elements is immaterial. As K decreases with increasing activation sparsity, the cost of the sorting implementation may fall accordingly. First, each set of output values may be subdivided into M sub-vectors. Each sub-vector is sent through a sorting network. The sorted sub-vector is subsequently loaded into one of M first-in-first-out (FIFO) circuits, with each sub-vector's largest value at the front of the FIFO queue.
A vector composed of the M top-of-FIFO values is then passed through a log2(M)-stage comparator tree in order to determine the maximum value in the output set. The maximum value is retained, and its associated indexing information (which indicates in which FIFO the value was located) is used to pop that element from the appropriate FIFO, exposing the FIFO's next largest element. This process is repeated K times, at which point the output vector has been filled with the top K elements and is passed to the next processing layer. In some embodiments, a 64-element output set is subdivided into eight 8-element sub-vectors. The sorting network may include 19 comparators arranged in six layers (a depth of six). There are 8 FIFO circuits, and a 3-level comparator tree is used to determine the maximum value in the 8-element top-of-FIFO vector.
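A behavioral sketch of this staged top-K selection is shown below, with Python lists and a max over FIFO fronts standing in for the sorting network, the M FIFOs, and the comparator tree.

```python
from collections import deque

def staged_top_k(values, m, k):
    """Split into M sub-vectors, sort each largest-first into a FIFO, then pop the
    overall maximum K times using a comparison over the M FIFO fronts."""
    n = len(values) // m
    fifos = [deque(sorted(values[i * n:(i + 1) * n], reverse=True)) for i in range(m)]
    winners = []
    for _ in range(k):
        best = max(range(m), key=lambda f: fifos[f][0] if fifos[f] else float("-inf"))
        winners.append(fifos[best].popleft())   # pop exposes that FIFO's next largest value
    return winners

print(staged_top_k([3, 9, 1, 7, 4, 8, 2, 6], m=2, k=3))   # -> [9, 8, 7]
```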
To prevent bottlenecks, the performance of the k-WTA implementation may be matched to the performance of the convolutional operator. The incoming results can either arrive in serial bursts or as complete result vectors. A k-WTA implementation could wait until all bursts have been concatenated and a complete output result is available, or take advantage of the burst intervals and combinationally sort the burst values before loading the values into one of the FIFOs.
Upon reading this disclosure, those of skill in the art will appreciate still additional alternative designs for processing nodes. Thus, while particular embodiments and applications have been illustrated and described, it is to be understood that the invention is not limited to the precise construction and components disclosed herein and that various modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope of the present disclosure.
The present application claims the benefit of U.S. Provisional Patent Application 63/218,354, filed on Jul. 4, 2021, which is hereby incorporated by reference in its entirety.