Aspects of the present disclosure generally relate to neural networks, and more particularly to bundling key tensors and value tensors to reduce memory between split networks.
Artificial neural networks may comprise interconnected groups of artificial neurons (e.g., neuron models). The artificial neural network (ANN) may be a computational device or be represented as a method to be performed by a computational device. Convolutional neural networks (CNNs) are a type of feed-forward ANN. Convolutional neural networks may include collections of neurons that each have a receptive field and that collectively tile an input space. Convolutional neural networks, such as deep convolutional neural networks (DCNs), have numerous applications. In particular, these neural network architectures are used in various technologies, such as image recognition, speech recognition, acoustic scene classification, keyword spotting, autonomous driving, and other classification tasks.
Given the many useful applications of neural networks, there is increasing demand for the use of neural networks to solve increasingly complex problems in further areas of application. One area of exploration is generative artificial intelligence. Large language models (LLMs) have made significant advances in the natural language understanding domain, and have gained popularity with respect to textual generative tasks as well as tasks that involve modeling information from textual and visual domains. LLMs may receive a prompt from a user, and in turn, may generate a response or completion.
In one aspect of the present disclosure, a processor-implemented method includes bundling a set of key (K) tensors into a single K tensor associated with an attention layer of a neural network model. The method further includes bundling a set of value (V) tensors into a single V tensor associated with the attention layer. The method also includes processing, via one or more application processors, the single K tensor and the single V tensor. The method further includes executing the neural network model based on processing the single K tensor and the single V tensor.
Another aspect of the present disclosure is directed to an apparatus including means for bundling a set of key (K) tensors into a single K tensor associated with an attention layer of a neural network model. The apparatus further includes means for bundling a set of value (V) tensors into a single V tensor associated with the attention layer. The apparatus also includes means for processing, via one or more application processors, the single K tensor and the single V tensor. The apparatus still further includes means for executing the neural network model based on processing the single K tensor and the single V tensor.
In another aspect of the present disclosure, a non-transitory computer-readable medium with non-transitory program code recorded thereon is disclosed. The program code is executed by a processor and includes program code to bundle a set of key (K) tensors into a single K tensor associated with an attention layer of a neural network model. The program code further includes program code to bundle a set of value (V) tensors into a single V tensor associated with the attention layer. The program code also includes program code to process, via one or more application processors, the single K tensor and the single V tensor. The program code still further includes program code to execute the neural network model based on processing the single K tensor and the single V tensor.
Another aspect of the present disclosure is directed to an apparatus having one or more processors, and one or more memories coupled with the one or more processors and storing instructions operable, when executed by the one or more processors, to cause the apparatus to bundle a set of key (K) tensors into a single K tensor associated with an attention layer of a neural network model. Execution of the instructions also causes the apparatus to bundle a set of value (V) tensors into a single V tensor associated with the attention layer. Execution of the instructions further causes the apparatus to process, via one or more application processors, the single K tensor and the single V tensor. Execution of the instructions still further causes the apparatus to execute the neural network model based on processing the single K tensor and the single V tensor.
Additional features and advantages of the disclosure will be described below. It should be appreciated by those skilled in the art that this disclosure may be readily utilized as a basis for modifying or designing other structures for carrying out the same purposes of the present disclosure. It should also be realized by those skilled in the art that such equivalent constructions do not depart from the teachings of the disclosure as set forth in the appended claims. The novel features, which are believed to be characteristic of the disclosure, both as to its organization and method of operation, together with further objects and advantages, will be better understood from the following description when considered in connection with the accompanying figures. It is to be expressly understood, however, that each of the figures is provided for the purpose of illustration and description only and is not intended as a definition of the limits of the present disclosure.
The features, nature, and advantages of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings in which like reference characters identify correspondingly throughout.
The detailed description set forth below, in connection with the appended drawings, is intended as a description of various configurations and is not intended to represent the only configurations in which the concepts described may be practiced. The detailed description includes specific details for the purpose of providing a thorough understanding of the various concepts. However, it will be apparent to those skilled in the art that these concepts may be practiced without these specific details. In some instances, well-known structures and components are shown in block diagram form in order to avoid obscuring such concepts.
Based on the teachings, one skilled in the art should appreciate that the scope of the disclosure is intended to cover any aspect of the disclosure, whether implemented independently of or combined with any other aspect of the disclosure. For example, an apparatus may be implemented or a method may be practiced using any number of the aspects set forth. In addition, the scope of the disclosure is intended to cover such an apparatus or method practiced using other structure, functionality, or structure and functionality in addition to or other than the various aspects of the disclosure set forth. It should be understood that any aspect of the disclosure disclosed may be embodied by one or more elements of a claim.
The word “exemplary” is used to mean “serving as an example, instance, or illustration.” Any aspect described as “exemplary” is not necessarily to be construed as preferred or advantageous over other aspects.
Although particular aspects are described, many variations and permutations of these aspects fall within the scope of the disclosure. Although some benefits and advantages of the preferred aspects are mentioned, the scope of the disclosure is not intended to be limited to particular benefits, uses or objectives. Rather, aspects of the disclosure are intended to be broadly applicable to different technologies, system configurations, networks, and protocols, some of which are illustrated by way of example in the figures and in the following description of the preferred aspects. The detailed description and drawings are merely illustrative of the disclosure rather than limiting, the scope of the disclosure being defined by the appended claims and equivalents thereof.
Large language models (LLMs) are examples of models that understand natural language. LLMs have gained popularity with respect to textual generative tasks as well as tasks that involve modeling information from textual and visual domains.
In many cases, an LLM uses a transformer (e.g., transformer model) based architecture to process sequences of data, particularly text. In an LLM, the transformer receives a sequence of tokens, which could be, for example, words, parts of words, or characters, and processes the tokens through its layers. As the data flows through each layer, the model learns increasingly complex representations of the input data. This allows the LLM to perform a wide range of language-related tasks, such as text completion, translation, summarization, and question-answering, with a high degree of fluency and coherence.
In transformers used for LLMs, keys (K) and values (V) are used to selectively focus on relevant parts of the input data. Each input item, such as a word in a sentence, is converted into a numerical vector. From each of these vectors, the model produces three new vectors: the query (Q), the key (K), and the value (V), each serving a distinct purpose in the attention mechanism. The transformer compares the query vector with all key vectors to calculate attention scores, reflecting the relevance of each input part to the output. These scores are then adjusted using scaling and a softmax function to create a probability distribution representing attention weights.
Subsequently, the value vectors are multiplied by these weights to emphasize more critical inputs while downplaying less important ones. This process is akin to concentrating more on one conversation despite numerous background noises. The weighted values are then aggregated to form the output at that position in the sequence, which is either fed into subsequent layers or contributes to the final output. By enabling each output element to be influenced by the entire input sequence, the transformer incorporates the context, which is useful for complex tasks, such as understanding entire sentences or documents.
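As a concrete illustration of the computation just described, the following is a minimal NumPy sketch of scaled dot-product attention. The variable names, dimensions, and random projection matrices are illustrative assumptions and are not elements of the disclosure.

```python
# Minimal NumPy sketch of the attention computation described above.
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Q: [seq, depth], K: [seq, depth], V: [seq, depth]."""
    depth = Q.shape[-1]
    # Compare every query with every key to obtain attention scores.
    scores = Q @ K.T / np.sqrt(depth)                       # [seq, seq]
    # Softmax turns the scaled scores into a probability distribution.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Weight the value vectors and aggregate them into the output.
    return weights @ V                                      # [seq, depth]

seq_len, depth = 8, 64
x = np.random.randn(seq_len, depth)
# In a real transformer, Q, K, and V come from learned projections of x.
Wq, Wk, Wv = [np.random.randn(depth, depth) for _ in range(3)]
out = scaled_dot_product_attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (8, 64)
```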
An attention layer (e.g., attention head) of the transformer determines parts of the input that are important and how these parts should influence the prediction tasks. In most cases, the K and V tensors for each attention layer are represented as [#Heads, Depth, Seq Length] and [#Heads, Seq Length, Depth], respectively. Each attention layer may include multiple heads, in which each head in an attention layer is an example of an independent attention mechanism. As such, each head may focus on different parts of the input data, capturing various aspects or features. By using multiple heads, transformers can capture a richer representation of the input data, leading to more powerful and nuanced understanding and generation of sequences. In conventional systems, all heads of an attention layer may share a common large set of K and V tensors, which can be less efficient for computation.
In some systems, each individual attention head is assigned its own pair of K and V tensors, instead of having a shared set of K and V tensors across all heads within the layer. For example, if an attention layer has ten heads, the attention layer may be associated with ten separate K tensors and ten separate V tensors. Each of these tensors would be a three-dimensional array with a specific shape tailored to its role. The K tensors, which store elements that represent questions asked by the neural network, would have the shape [1, Depth, Sequence Length (Seq Length)], where depth represents a size of the representation and the sequence length represents a length of the input sequence. The V tensors, which hold the answers to the questions, would have the shape [1, Seq Length, Depth].
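For illustration only, this per-head layout might be represented as follows; the head count and dimensions are assumed values rather than parameters of the disclosure.

```python
# Illustrative sketch of the per-head layout: each of the ten heads owns its
# own K tensor of shape [1, Depth, Seq Length] and its own V tensor of shape
# [1, Seq Length, Depth]. Values are random placeholders.
import numpy as np

num_heads, depth, seq_len = 10, 64, 128
k_tensors = [np.random.randn(1, depth, seq_len) for _ in range(num_heads)]
v_tensors = [np.random.randn(1, seq_len, depth) for _ in range(num_heads)]

# The attention layer is now associated with 2 * num_heads separate tensors,
# each of which is a distinct input/output in the computational graph.
print(len(k_tensors) + len(v_tensors))  # 20
```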
Managing tensor operations across multiple attention heads may increase a complexity of the transformer. Additionally, the multiple K and V tensors increase a number of inputs and outputs throughout the various operations and transformations of the transformer. Accordingly, the use of multiple K and V tensors increases processing time at both a central processing unit (CPU) and neural signal processor (NSP). For example, at the CPU, considerable effort may be expended in informing the NSP of data locations.
Various aspects of the present disclosure are directed to bundling individual key (K) and value (V) tensors for each attention head into a single K tensor and a single V tensor, respectively. This approach simplifies the structure so that each attention layer is paired with just one K tensor and one V tensor. To accommodate a four-dimensional (4D) tensor layout, which is designed to optimize processing efficiency, the shape of the K tensor is structured as [Number of Heads, 1, Depth, Seq Length], and the V tensor is similarly structured as [Number of Heads, 1, Seq Length, Depth]. The number of heads represents a number of attention heads in the attention layer.
Particular aspects of the subject matter described in this disclosure can be implemented to realize one or more of the following potential advantages. In some examples, the described techniques of K/V tensor bundling may reduce a number of K and V inputs and outputs throughout the various operations and transformations of the transformer, thereby reducing processor load and improving processor execution times.
The SOC 100 may also include additional processing blocks tailored to specific functions, such as a GPU 104, a DSP 106, a connectivity block 110, which may include fifth generation (5G) connectivity, fourth generation long term evolution (4G LTE) connectivity, Wi-Fi connectivity, USB connectivity, Bluetooth connectivity, and the like, and a multimedia processor 112 that may, for example, detect and recognize gestures. In one implementation, the NPU 108 is implemented in the CPU 102, DSP 106, and/or GPU 104. The SOC 100 may also include a sensor processor 114, image signal processors (ISPs) 116, and/or navigation module 120, which may include a global positioning system.
The SOC 100 may be based on an ARM, RISC-V (RISC-five), or any reduced instruction set computing (RISC) architecture. In aspects of the present disclosure, the instructions loaded into the general-purpose processor 102 may include code to bundle a set of key (K) tensors into a single K tensor associated with an attention layer of a neural network model; code to bundle a set of value (V) tensors into a single V tensor associated with the attention layer; code to process, via one or more application processors, the single K tensor and the single V tensor; and code to execute the neural network model based on processing the single K tensor and the single V tensor.
Deep learning architectures may perform an object recognition task by learning to represent inputs at successively higher levels of abstraction in each layer, thereby building up a useful feature representation of the input data. In this way, deep learning addresses a major bottleneck of traditional machine learning. Prior to the advent of deep learning, a machine learning approach to an object recognition problem may have relied heavily on human engineered features, perhaps in combination with a shallow classifier. A shallow classifier may be a two-class linear classifier, for example, in which a weighted sum of the feature vector components may be compared with a threshold to predict to which class the input belongs. Human engineered features may be templates or kernels tailored to a specific problem domain by engineers with domain expertise. Deep learning architectures, in contrast, may learn to represent features that are similar to what a human engineer might design, but through training. Furthermore, a deep network may learn to represent and recognize new types of features that a human might not have considered.
A deep learning architecture may learn a hierarchy of features. If presented with visual data, for example, the first layer may learn to recognize relatively simple features, such as edges, in the input stream. In another example, if presented with auditory data, the first layer may learn to recognize spectral power in specific frequencies. The second layer, taking the output of the first layer as input, may learn to recognize combinations of features, such as simple shapes for visual data or combinations of sounds for auditory data. For instance, higher layers may learn to represent complex shapes in visual data or words in auditory data. Still higher layers may learn to recognize common visual objects or spoken phrases.
Deep learning architectures may perform especially well when applied to problems that have a natural hierarchical structure. For example, the classification of motorized vehicles may benefit from first learning to recognize wheels, windshields, and other features. These features may be combined at higher layers in different ways to recognize cars, trucks, and airplanes.
Neural networks may be designed with a variety of connectivity patterns. In feed-forward networks, information is passed from lower to higher layers, with each neuron in a given layer communicating to neurons in higher layers. A hierarchical representation may be built up in successive layers of a feed-forward network, as described above. Neural networks may also have recurrent or feedback (also called top-down) connections. In a recurrent connection, the output from a neuron in a given layer may be communicated to another neuron in the same layer. A recurrent architecture may be helpful in recognizing patterns that span more than one of the input data chunks that are delivered to the neural network in a sequence. A connection from a neuron in a given layer to a neuron in a lower layer is called a feedback (or top-down) connection. A network with many feedback connections may be helpful when the recognition of a high-level concept may aid in discriminating the particular low-level features of an input.
The connections between layers of a neural network may be fully connected or locally connected.
One example of a locally connected neural network is a convolutional neural network.
One type of convolutional neural network is a deep convolutional network (DCN).
The DCN 200 may be trained with supervised learning. During training, the DCN 200 may be presented with an image, such as the image 226 of a speed limit sign, and a forward pass may then be computed to produce an output 222. The DCN 200 may include a feature extraction section and a classification section. Upon receiving the image 226, a convolutional layer 232 may apply convolutional kernels (not shown) to the image 226 to generate a first set of feature maps 218. As an example, the convolutional kernel for the convolutional layer 232 may be a 5×5 kernel that generates 28×28 feature maps. In the present example, because four different feature maps are generated in the first set of feature maps 218, four different convolutional kernels were applied to the image 226 at the convolutional layer 232. The convolutional kernels may also be referred to as filters or convolutional filters.
The first set of feature maps 218 may be subsampled by a max pooling layer (not shown) to generate a second set of feature maps 220. The max pooling layer reduces the size of the first set of feature maps 218. That is, a size of the second set of feature maps 220, such as 14×14, is less than the size of the first set of feature maps 218, such as 28×28. The reduced size provides similar information to a subsequent layer while reducing memory consumption. The second set of feature maps 220 may be further convolved via one or more subsequent convolutional layers (not shown) to generate one or more subsequent sets of feature maps (not shown).
In the example of
In the present example, the probabilities in the output 222 for “sign” and “60” are higher than the probabilities of the others of the output 222, such as “30,” “40,” “50,” “70,” “80,” “90,” and “100”. Before training, the output 222 produced by the DCN 200 may likely be incorrect. Thus, an error may be calculated between the output 222 and a target output. The target output is the ground truth of the image 226 (e.g., “sign” and “60”). The weights of the DCN 200 may then be adjusted so the output 222 of the DCN 200 is more closely aligned with the target output.
To adjust the weights, a learning algorithm may compute a gradient vector for the weights. The gradient may indicate an amount that an error would increase or decrease if the weight were adjusted. At the top layer, the gradient may correspond directly to the value of a weight connecting an activated neuron in the penultimate layer and a neuron in the output layer. In lower layers, the gradient may depend on the value of the weights and on the computed error gradients of the higher layers. The weights may then be adjusted to reduce the error. This manner of adjusting the weights may be referred to as “back propagation” as it involves a “backward pass” through the neural network.
In practice, the error gradient of weights may be calculated over a small number of examples, so that the calculated gradient approximates the true error gradient. This approximation method may be referred to as stochastic gradient descent. Stochastic gradient descent may be repeated until the achievable error rate of the entire system has stopped decreasing or until the error rate has reached a target level. After learning, the DCN 200 may be presented with new images (e.g., the speed limit sign of the image 226) and a forward pass through the DCN 200 may yield an output 222 that may be considered an inference or a prediction of the DCN 200.
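As a toy illustration of the gradient computation and weight updates described above, the following sketch performs stochastic gradient descent on a small linear model; the data, learning rate, and batch size are arbitrary assumptions made only for this example.

```python
# Minimal sketch of stochastic gradient descent on a toy linear model.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))                        # toy inputs
w_true = np.array([1.0, -2.0, 0.5, 3.0])
y = X @ w_true + 0.1 * rng.normal(size=256)          # toy targets with noise

w = np.zeros(4)
lr, batch_size = 0.1, 16
for step in range(200):
    idx = rng.integers(0, len(X), size=batch_size)   # small batch of examples
    xb, yb = X[idx], y[idx]
    err = xb @ w - yb
    grad = xb.T @ err / batch_size                   # gradient of mean squared error
    w -= lr * grad                                   # adjust weights to reduce the error
print(np.round(w, 2))                                # approaches w_true
```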
Deep belief networks (DBNs) are probabilistic models comprising multiple layers of hidden nodes. DBNs may be used to extract a hierarchical representation of training data sets. A DBN may be obtained by stacking up layers of Restricted Boltzmann Machines (RBMs). An RBM is a type of artificial neural network that can learn a probability distribution over a set of inputs. Because RBMs can learn a probability distribution in the absence of information about the class to which each input should be categorized, RBMs are often used in unsupervised learning. Using a hybrid unsupervised and supervised paradigm, the bottom RBMs of a DBN may be trained in an unsupervised manner and may serve as feature extractors, and the top RBM may be trained in a supervised manner (on a joint distribution of inputs from the previous layer and target classes) and may serve as a classifier.
DCNs are networks of convolutional networks, configured with additional pooling and normalization layers. DCNs have achieved state-of-the-art performance on many tasks. DCNs can be trained using supervised learning in which both the input and output targets are known for many exemplars and are used to modify the weights of the network by use of gradient descent methods.
DCNs may be feed-forward networks. In addition, as described above, the connections from a neuron in a first layer of a DCN to a group of neurons in the next higher layer are shared across the neurons in the first layer. The feed-forward and shared connections of DCNs may be exploited for fast processing. The computational burden of a DCN may be much less, for example, than that of a similarly sized neural network that comprises recurrent or feedback connections.
The processing of each layer of a convolutional network may be considered a spatially invariant template or basis projection. If the input is first decomposed into multiple channels, such as the red, green, and blue channels of a color image, then the convolutional network trained on that input may be considered three-dimensional, with two spatial dimensions along the axes of the image and a third dimension capturing color information. The outputs of the convolutional connections may be considered to form a feature map in the subsequent layer, with each element of the feature map (e.g., 220) receiving input from a range of neurons in the previous layer (e.g., feature maps 218) and from each of the multiple channels. The values in the feature map may be further processed with a non-linearity, such as a rectification, max(0, x). Values from adjacent neurons may be further pooled, which corresponds to down sampling, and may provide additional local invariance and dimensionality reduction. Normalization, which corresponds to whitening, may also be applied through lateral inhibition between neurons in the feature map.
Although only two of the convolution blocks 354A, 354B are shown, the present disclosure is not so limited, and instead, any number of the convolution blocks 354A, 354B may be included in the DCN 350 according to design preference.
The convolution layers 356 may include one or more convolutional filters, which may be applied to the input data to generate a feature map. The normalization layer 358 may normalize the output of the convolution filters. For example, the normalization layer 358 may provide whitening or lateral inhibition. The max pooling layer 360 may provide down sampling aggregation over space for local invariance and dimensionality reduction.
The parallel filter banks, for example, of a deep convolutional network may be loaded on a CPU 102 or GPU 104 of an SOC 100 (e.g.,
The DCN 350 may also include one or more fully connected layers 362 (FC1 and FC2). The DCN 350 may further include a logistic regression (LR) layer 364. Between each layer 356, 358, 360, 362, 364 of the DCN 350 are weights (not shown) that are to be updated. The output of each of the layers (e.g., 356, 358, 360, 362, 364) may serve as an input of a succeeding one of the layers (e.g., 356, 358, 360, 362, 364) in the DCN 350 to learn hierarchical feature representations from input data 352 (e.g., images, audio, video, sensor data and/or other input data) supplied at the first of the convolution blocks 354A. The output of the DCN 350 is a classification score 366 for the input data 352. The classification score 366 may be a set of probabilities, where each probability is the probability of the input data including a feature from a set of features.
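The following PyTorch sketch shows one possible arrangement in the spirit of the DCN 350: two convolution blocks, each with convolution, normalization, and max pooling layers, followed by two fully connected layers and a softmax that produces a classification score. The layer sizes, channel counts, and class count are illustrative assumptions, not parameters of the disclosure.

```python
# Illustrative sketch of a small deep convolutional network.
import torch
import torch.nn as nn

class TinyDCN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=3, padding=1),  # convolutional filters
                nn.BatchNorm2d(cout),                            # normalization (whitening)
                nn.ReLU(),                                       # non-linearity, max(0, x)
                nn.MaxPool2d(2),                                 # down sampling for local invariance
            )
        self.block_a = block(3, 16)
        self.block_b = block(16, 32)
        self.fc1 = nn.Linear(32 * 8 * 8, 64)                     # fully connected layers
        self.fc2 = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.block_b(self.block_a(x))
        x = x.flatten(start_dim=1)
        logits = self.fc2(torch.relu(self.fc1(x)))
        return torch.softmax(logits, dim=-1)                     # classification score

scores = TinyDCN()(torch.randn(1, 3, 32, 32))
print(scores.shape, float(scores.sum()))                         # torch.Size([1, 10]), ~1.0
```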
The AI application 402 may be configured to call functions defined in a user space 404 that may, for example, provide an LLM, process an input for an LLM, or provide a generative AI application. The AI application 402 may make a request to compiled program code associated with a library defined in an AI function application programming interface (API) 406. This request may ultimately rely on the output of a deep neural network configured to provide an inference response based on video and positioning data, for example.
The run-time engine 408, which may be compiled code of a run-time framework, may be further accessible to the AI application 402. When caused to provide an inference response, the run-time engine 408 may in turn send a signal to an operating system in an operating system (OS) space 410, such as a Kernel 412, running on the SOC 420. In some examples, the Kernel 412 may be a LINUX Kernel. The operating system, in turn, may cause K/V tensor bundling to be performed on the CPU 422, the DSP 424, the GPU 426, the NPU 428, or some combination thereof. The CPU 422 may be accessed directly by the operating system, and other processing blocks may be accessed through a driver, such as a driver 414, 416, or 418 for, respectively, the DSP 424, the GPU 426, or the NPU 428. In this example, the deep neural network may be configured to run on a combination of processing blocks, such as the CPU 422, the DSP 424, and the GPU 426, or may be run on the NPU 428.
Large language models (LLMs) are examples of models that understand natural language. LLMs have gained popularity with respect to textual generative tasks as well as tasks that involve modeling information from textual and visual domains.
In many cases, an LLM uses a transformer (e.g., transformer model) based architecture to process sequences of data, particularly text. In an LLM, the transformer receives a sequence of tokens, which could be, for example, words, parts of words, or characters, and processes the tokens through its layers. As the data flows through each layer, the model learns increasingly complex representations of the input data. This allows the LLM to perform a wide range of language-related tasks, such as text completion, translation, summarization, and question-answering, with a high degree of fluency and coherence.
In an LLM, token generation begins with tokenization, where an input text is segmented into tokens, which could be complete words, subwords, or even individual characters, forming the basic units the model can interpret. These tokens are then transformed into numerical vectors through an embedding process, which captures and encodes the semantic significance of each token into a format the model can process.
Due to a transformer's lack of inherent sequence processing capability, positional encodings are added to these embeddings to provide the model with the sequence order information. The tokens are then input to the transformer's layers, each including a multi-head attention (MHA) mechanism and a feed-forward network. This MHA enables the model to weigh the importance of different parts of the input sequence selectively, refining its focus based on the relevance to the current processing point.
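As one example of a positional encoding, the following NumPy sketch implements the sinusoidal scheme popularized by the original transformer architecture; the disclosure does not require this particular encoding, and the dimensions shown are assumptions.

```python
# Illustrative sinusoidal positional encoding added to token embeddings.
import numpy as np

def sinusoidal_positional_encoding(seq_len, depth):
    pos = np.arange(seq_len)[:, None]                    # token positions
    i = np.arange(depth)[None, :]                        # embedding dimension indices
    angle = pos / np.power(10000.0, (2 * (i // 2)) / depth)
    # Even dimensions use sine, odd dimensions use cosine.
    return np.where(i % 2 == 0, np.sin(angle), np.cos(angle))  # [seq_len, depth]

embeddings = np.random.randn(16, 64)                     # toy token embeddings
embeddings = embeddings + sinusoidal_positional_encoding(16, 64)
print(embeddings.shape)  # (16, 64)
```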
A final layer may generate a set of output vectors, each vector representing the model's contextual understanding. To predict the next token, the output vector for the current endpoint is passed through a dense layer equipped with a softmax activation. This produces a probability distribution over the entire vocabulary, indicating the likelihood of each token following the existing sequence. Choosing the subsequent token can be deterministic, using an ArgMax function to select the most probable token, or stochastic, introducing an element of randomness by sampling from the distribution. The process continues iteratively, with each newly predicted token being added to the sequence and the model re-engaging with the expanded input until a predefined stopping criterion is reached. This recursive process allows LLMs to generate text sequences that are contextually rich, coherent, and often strikingly similar to text produced by humans.
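The two selection strategies mentioned above, deterministic ArgMax and stochastic sampling, might be sketched as follows; the vocabulary and logits are toy placeholders rather than outputs of a real model.

```python
# Illustrative sketch of next-token selection from a softmax distribution.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat", "on", "mat", "<eos>"]
logits = rng.normal(size=len(vocab))                     # stand-in for the dense layer output

probs = np.exp(logits - logits.max())
probs /= probs.sum()                                     # softmax over the vocabulary

greedy_token = vocab[int(np.argmax(probs))]              # deterministic ArgMax choice
sampled_token = vocab[rng.choice(len(vocab), p=probs)]   # stochastic sampling choice
print(greedy_token, sampled_token)
```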
The first KV model 500 may receive input text 502. The input text 502 may include a system prompt and a user prompt. The system prompt may be a standard prompt that may, for instance, indicate a greeting and/or an instruction to a user for operating the model (e.g., “please enter a question”). The user prompt may comprise the user input such as a task for the LLM. The user prompt may have a variable length.
The input text 502 may be provided to a tokenizer 504. The tokenizer 504 may divide the input text 502 into multiple portions referred to as tokens 506. A token 506 may comprise a subpart of the input text 502, such as a sequence of characters (e.g., a token may be on average about four characters in length), a word, or a phrase. In the first KV model 500, the tokenizer 504 processes the input text 502 that may include, but is not limited to, a word, a sentence, a paragraph, or a document, for example, and may generate all of the tokens 506 for the input text 502. Then, the tokenizer 504 may provide all of the tokens 506 to the LLM 512 at once.
A position embedding 510 may be applied to maintain information related to the order of the tokens 506. An attention mask 508 may also be applied to identify more salient tokens (e.g., 506) corresponding to the input text 502. In turn, the LLM 512 may generate a prediction of a single following token 514. The generated token 514 may be considered a completion, for example, a following word in a response or output. The LLM 512 may be configured to generate multiple tokens 514 in an autoregressive manner by writing the generated token 514 to memory, such as a KV tensor buffer 516 to retain the internal state KV$ (e.g., a data structure referred to as KV cache, which may represent keys (K) and values (V) of the previously generated token) 518. The internal state KV$ 518 may be read from the KV tensor buffer 516 and appended to the previous input tokens 506. Attention mask and position embeddings 520 may be updated and supplied as input to the internal state KV$ 518. As the LLM 512 generates each token 514, the generated token 514 may be processed by a detokenizer 524 in a reciprocal manner to the tokenizer 504 to generate output text 526 (e.g., a sequence of characters, words, or phrases). The process may continue to be repeated in this manner. With each iteration, the generated token 522 may update the internal state KV$ 518 and may be written to the KV tensor buffer 516 to update the KV tensor buffer 516, which may, in turn, be loaded as the following input.
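A highly simplified sketch of this autoregressive loop around a KV cache is shown below, in the spirit of the KV tensor buffer 516 and internal state KV$ 518. The decoder step is a random stand-in rather than a real LLM, and the function name, shapes, and stopping criterion are illustrative assumptions only.

```python
# Illustrative autoregressive generation loop that appends to a KV cache.
import numpy as np

rng = np.random.default_rng(0)
depth, vocab_size = 64, 100

def fake_decoder_step(token_id, k_cache, v_cache):
    """Stand-in for a single-token decoder step; returns new K/V columns and logits."""
    new_k = rng.normal(size=(depth, 1))                  # [Depth, 1] for this token
    new_v = rng.normal(size=(1, depth))                  # [1, Depth] for this token
    k_cache = np.concatenate([k_cache, new_k], axis=1)   # append along Seq Length
    v_cache = np.concatenate([v_cache, new_v], axis=0)
    logits = rng.normal(size=vocab_size)                 # stand-in for model output
    return logits, k_cache, v_cache

k_cache = np.zeros((depth, 0))                           # empty caches before generation
v_cache = np.zeros((0, depth))
token = 1                                                # e.g., a start-of-sequence token
for _ in range(5):                                       # generate five tokens
    logits, k_cache, v_cache = fake_decoder_step(token, k_cache, v_cache)
    token = int(np.argmax(logits))                       # generated token is fed back as input
print(k_cache.shape, v_cache.shape)                      # (64, 5) (5, 64)
```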
In summary, the first KV model 500 shown with respect to
In transformers used for LLMs, keys (K) and values (V) are used to selectively focus on relevant parts of the input data. Each input item, such as a word in a sentence, is converted into a numerical vector. From each of these vectors, the model produces three new vectors: the query (Q), the key (K), and the value (V), each serving a distinct purpose in the attention mechanism. The transformer compares the query vector with all key vectors to calculate attention scores, reflecting the relevance of each input part to the output. These scores are then adjusted using scaling and a softmax function to create a probability distribution representing attention weights.
Subsequently, the value vectors are multiplied by these weights to emphasize more critical inputs while downplaying less important ones. This process is akin to concentrating more on one conversation despite numerous background noises. The weighted values are then aggregated to form the output at that position in the sequence, which is either fed into subsequent layers or contributes to the final output. By enabling each output element to be influenced by the entire input sequence, the transformer incorporates the context, which is useful for complex tasks, such as understanding entire sentences or documents.
An attention layer (e.g., attention head) of the transformer determines parts of the input that are important and how these parts should influence the prediction tasks. In most cases, the K and V tensors for each attention layer are represented as [#Heads, Depth, Seq Length] and [#Heads, Seq Length, Depth], respectively. Each attention layer may include multiple heads, in which each head in an attention layer is an example of an independent attention mechanism. As such, each head may focus on different parts of the input data, capturing various aspects or features. By using multiple heads, transformers can capture a richer representation of the input data, leading to more powerful and nuanced understanding and generation of sequences. In conventional systems, all heads of an attention layer may share multiple sets of K and V tensors, which can be less efficient for computation. In some systems, each individual attention head is assigned its own pair of K and V tensors, instead of having a shared set of K and V tensors across all heads within the layer.
Managing tensor operations across multiple attention heads may increase a complexity of the transformer. Additionally, the multiple K and V tensors increase a number of inputs and outputs throughout the various operations and transformations of the transformer. Accordingly, the use of multiple K and V tensors increases processing time at both a CPU and neural signal processor (NSP). For example, at the CPU, considerable effort may be expended in informing the NSP of data locations. The NSP may be an example of an NPU 108 described with reference to
It may be desirable to simplify how tensors are organized and accessed by hardware during computation. Various aspects of the present disclosure are directed to bundling individual key (K) and value (V) tensors for each attention head into a single K tensor and a single V tensor, respectively. This approach simplifies the structure so that each attention layer is paired with just one K tensor and one V tensor.
Aspects of the present disclosure add a single dimension, represented by a value of one, into the middle of the tensor's shape and aggregate the tensors along the outermost dimension. To accommodate a four-dimensional (4D) tensor layout, which is designed to optimize processing efficiency, the shape of the K tensor is structured as [Number of Heads, 1, Depth, Seq Length], and the V tensor is similarly structured as [Number of Heads, 1, Seq Length, Depth]. The number of heads represents a number of attention heads in the attention layer. The outermost dimension is particularly amenable to being split by the hardware because it corresponds to large, discrete jumps in memory. This reorganization of each tensor's shape aligns with the hardware's innate handling of data, making it easier for the hardware, such as the CPU, NSP, NPU, GPU, microprocessor unit (MPU), and/or other hardware to parse and process the information. In some examples, the reshaping of the tensors reduces a number of handshakes between the CPU and the NSP. Each handshake confirms a location of data in memory. This reduction in pre-computation overhead may decrease execution time in the transformer.
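One way the bundling described above might be realized is sketched below in NumPy: the per-head K tensors of shape [1, Depth, Seq Length] and V tensors of shape [1, Seq Length, Depth] are stacked along a new outermost head dimension, so that the existing singleton becomes the inserted middle dimension of the resulting 4D tensors. The dimensions are assumed values for illustration only.

```python
# Illustrative sketch of bundling per-head K/V tensors into single 4D tensors.
import numpy as np

num_heads, depth, seq_len = 10, 64, 128
k_per_head = [np.random.randn(1, depth, seq_len) for _ in range(num_heads)]
v_per_head = [np.random.randn(1, seq_len, depth) for _ in range(num_heads)]

# Aggregate along a new outermost (head) dimension; the leading singleton of
# each per-head tensor becomes the middle dimension of the bundled tensor.
k_bundled = np.stack(k_per_head, axis=0)   # [Number of Heads, 1, Depth, Seq Length]
v_bundled = np.stack(v_per_head, axis=0)   # [Number of Heads, 1, Seq Length, Depth]

print(k_bundled.shape)  # (10, 1, 64, 128)
print(v_bundled.shape)  # (10, 1, 128, 64)
```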
In most cases, the execution stack of an NPU is designed to work across different processors within a device. Initially, code is executed on an application processor, such as a CPU. After the initial computation, the data may be transferred, via a communication protocol designed for high-speed data transfer, to a specialized processor (e.g., digital signal processor (DSP), or GPU). In the absence of bundling, the process involves handling a substantial number of K and V tensor inputs and outputs for each computational graph associated with the NPU and CPU. Bundling reduces the number of inputs and outputs by reducing the number of K and V tensors processed and transferred by at least the NPU and CPU. This reduction in the number of tensors eases the processing load on at least an application processor, the NPU, and the specialized processor. The decrease in the processing load may reduce processing time. This decrease in time represents an efficiency gain in the execution of complex models, as less time is spent on the overhead of coordinating between the CPU and the specialized processor.
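As a rough illustration of the reduction in graph inputs and outputs, assume a hypothetical model with 32 attention layers and 32 heads per layer; these counts are assumptions, not values from the disclosure.

```python
# Illustrative count of K/V tensors exposed as graph inputs/outputs.
layers, heads = 32, 32
unbundled_io = layers * heads * 2   # one K and one V tensor per head per layer
bundled_io = layers * 2             # one bundled K and one bundled V tensor per layer
print(unbundled_io, bundled_io)     # 2048 vs 64 tensors to track and transfer
```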
Batched inference is a process in machine learning where multiple inputs (e.g., batches of data) are simultaneously processed, rather than one by one. This approach can reduce the use of computational resources by leveraging parallel processing capabilities of hardware, such as a CPU and/or NPU. In some examples, bundling may be applied to batched inference. In such examples, an additional dimension may be specified for the batch size, which leads to a five-dimensional (5D) tensor structure.
An NPU, or other neural processor, may support tensors up to rank 5, meaning the processor can handle the complexity of the 5D tensors. By incorporating the batch dimension into the K/V tensor bundling, the tensor shapes may be adapted to accommodate multiple simultaneous inference requests. The K tensor takes the shape [Batch, Number of Heads, 1, Depth, Seq Length], where the batch parameter represents a number of data points being processed. The V tensor matches this with a shape of [Batch, Number of Heads, 1, Seq Length, Depth]. With this 5D approach, each batch of data can benefit from the efficiency of K/V tensor bundling while being concurrently processed.
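Extending the earlier bundling sketch, the batched layout might look as follows; the batch size and other dimensions are illustrative assumptions.

```python
# Illustrative sketch of the rank-5 batched bundled layout.
import numpy as np

batch, num_heads, depth, seq_len = 4, 10, 64, 128
k_bundled = np.random.randn(num_heads, 1, depth, seq_len)   # one bundled K per request
v_bundled = np.random.randn(num_heads, 1, seq_len, depth)   # one bundled V per request

# Stack one bundled K/V pair per request in the batch along a new outermost
# batch dimension, yielding 5D tensors.
k_batched = np.stack([k_bundled] * batch, axis=0)   # [Batch, Heads, 1, Depth, Seq Length]
v_batched = np.stack([v_bundled] * batch, axis=0)   # [Batch, Heads, 1, Seq Length, Depth]
print(k_batched.shape, v_batched.shape)             # (4, 10, 1, 64, 128) (4, 10, 1, 128, 64)
```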
Implementation examples are provided in the following numbered clauses.
The various operations of methods described above may be performed by any suitable means capable of performing the corresponding functions. The means may include various hardware and/or software component(s) and/or module(s), including, but not limited to, a circuit, an application specific integrated circuit (ASIC), or processor. Generally, where there are operations illustrated in the figures, those operations may have corresponding counterpart means-plus-function components with similar numbering.
As used, the term “determining” encompasses a wide variety of actions. For example, “determining” may include calculating, computing, processing, deriving, investigating, looking up (e.g., looking up in a table, a database, or another data structure), ascertaining and the like. Additionally, “determining” may include receiving (e.g., receiving information), accessing (e.g., accessing data in a memory) and the like. Furthermore, “determining” may include resolving, selecting, choosing, establishing, and the like.
As used, a phrase referring to “at least one of” a list of items refers to any combination of those items, including single members. As an example, “at least one of: a, b, or c” is intended to cover: a, b, c, a-b, a-c, b-c, and a-b-c.
The various illustrative logical blocks, modules and circuits described in connection with the present disclosure may be implemented or performed with a general-purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array signal (FPGA) or other programmable logic device (PLD), discrete gate or transistor logic, discrete hardware components or any combination thereof designed to perform the functions described. A general-purpose processor may be a microprocessor, but in the alternative, the processor may be any commercially available processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.
The steps of a method or algorithm described in connection with the present disclosure may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in any form of storage medium that is known in the art. Some examples of storage media that may be used include random access memory (RAM), read only memory (ROM), flash memory, erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), registers, a hard disk, a removable disk, a CD-ROM and so forth. A software module may comprise a single instruction, or many instructions, and may be distributed over several different code segments, among different programs, and across multiple storage media. A storage medium may be coupled to a processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor.
The methods disclosed comprise one or more steps or actions for achieving the described method. The method steps and/or actions may be interchanged with one another without departing from the scope of the claims. In other words, unless a specific order of steps or actions is specified, the order and/or use of specific steps and/or actions may be modified without departing from the scope of the claims.
The functions described may be implemented in hardware, software, firmware, or any combination thereof. If implemented in hardware, an example hardware configuration may comprise a processing system in a device. The processing system may be implemented with a bus architecture. The bus may include any number of interconnecting buses and bridges depending on the specific application of the processing system and the overall design constraints. The bus may link together various circuits including a processor, machine-readable media, and a bus interface. The bus interface may be used to connect a network adapter, among other things, to the processing system via the bus. The network adapter may be used to implement signal processing functions. For certain aspects, a user interface (e.g., keypad, display, mouse, joystick, etc.) may also be connected to the bus. The bus may also link various other circuits such as timing sources, peripherals, voltage regulators, power management circuits, and the like, which are well known in the art, and therefore, will not be described any further.
The processor may be responsible for managing the bus and general processing, including the execution of software stored on the machine-readable media. The processor may be implemented with one or more general-purpose and/or special-purpose processors. Examples include microprocessors, microcontrollers, DSP processors, and other circuitry that can execute software. Software shall be construed broadly to mean instructions, data, or any combination thereof, whether referred to as software, firmware, middleware, microcode, hardware description language, or otherwise. Machine-readable media may include, by way of example, random access memory (RAM), flash memory, read only memory (ROM), programmable read-only memory (PROM), erasable programmable read-only memory (EPROM), electrically erasable programmable Read-only memory (EEPROM), registers, magnetic disks, optical disks, hard drives, or any other suitable storage medium, or any combination thereof. The machine-readable media may be embodied in a computer-program product. The computer-program product may comprise packaging materials.
In a hardware implementation, the machine-readable media may be part of the processing system separate from the processor. However, as those skilled in the art will readily appreciate, the machine-readable media, or any portion thereof, may be external to the processing system. By way of example, the machine-readable media may include a transmission line, a carrier wave modulated by data, and/or a computer product separate from the device, all which may be accessed by the processor through the bus interface. Alternatively, or in addition, the machine-readable media, or any portion thereof, may be integrated into the processor, such as the case may be with cache and/or general register files. Although the various components discussed may be described as having a specific location, such as a local component, they may also be configured in various ways, such as certain components being configured as part of a distributed computing system.
The processing system may be configured as a general-purpose processing system with one or more microprocessors providing the processor functionality and external memory providing at least a portion of the machine-readable media, all linked together with other supporting circuitry through an external bus architecture. Alternatively, the processing system may comprise one or more neuromorphic processors for implementing the neuron models and models of neural systems described. As another alternative, the processing system may be implemented with an application specific integrated circuit (ASIC) with the processor, the bus interface, the user interface, supporting circuitry, and at least a portion of the machine-readable media integrated into a single chip, or with one or more field programmable gate arrays (FPGAs), programmable logic devices (PLDs), controllers, state machines, gated logic, discrete hardware components, or any other suitable circuitry, or any combination of circuits that can perform the various functionality described throughout this disclosure. Those skilled in the art will recognize how best to implement the described functionality for the processing system depending on the particular application and the overall design constraints imposed on the overall system.
The machine-readable media may comprise a number of software modules. The software modules include instructions that, when executed by the processor, cause the processing system to perform various functions. The software modules may include a transmission module and a receiving module. Each software module may reside in a single storage device or be distributed across multiple storage devices. By way of example, a software module may be loaded into RAM from a hard drive when a triggering event occurs. During execution of the software module, the processor may load some of the instructions into cache to increase access speed. One or more cache lines may then be loaded into a general register file for execution by the processor. When referring to the functionality of a software module below, it will be understood that such functionality is implemented by the processor when executing instructions from that software module. Furthermore, it should be appreciated that aspects of the present disclosure result in improvements to the functioning of the processor, computer, machine, or other system implementing such aspects.
If implemented in software, the functions may be stored or transmitted over as one or more instructions or code on a computer-readable medium. Computer-readable media include both computer storage media and communication media including any medium that facilitates transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Additionally, any connection is properly termed a computer-readable medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared (IR), radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disk and disc, as used, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray® disc where disks usually reproduce data magnetically, while discs reproduce data optically with lasers. Thus, in some aspects, computer-readable media may comprise non-transitory computer-readable media (e.g., tangible media). In addition, for other aspects computer-readable media may comprise transitory computer-readable media (e.g., a signal). Combinations of the above should also be included within the scope of computer-readable media.
Thus, certain aspects may comprise a computer program product for performing the operations presented. For example, such a computer program product may comprise a computer-readable medium having instructions stored (and/or encoded) thereon, the instructions being executable by one or more processors to perform the operations described. For certain aspects, the computer program product may include packaging material.
Further, it should be appreciated that modules and/or other appropriate means for performing the methods and techniques described can be downloaded and/or otherwise obtained by a user terminal and/or base station as applicable. For example, such a device can be coupled to a server to facilitate the transfer of means for performing the methods described. Alternatively, various methods described can be provided via storage means (e.g., RAM, ROM, a physical storage medium such as a compact disc (CD) or floppy disk, etc.), such that a user terminal and/or base station can obtain the various methods upon coupling or providing the storage means to the device. Moreover, any other suitable technique for providing the methods and techniques described to a device can be utilized.
It is to be understood that the claims are not limited to the precise configuration and components illustrated above. Various modifications, changes, and variations may be made in the arrangement, operation, and details of the methods and apparatus described above without departing from the scope of the claims.