Long short-term memory (LSTM) networks have been widely used for speech recognition, language modeling, sentiment analysis, text prediction, and other applications. An LSTM network includes one or more LSTM layers. Each LSTM layer can include an input gate, a forget gate, a memory block, an output gate, and one or more non-linear cells. The input gate and the forget gate control the flow of information into and out of the memory block, while the output gate controls how much information from the memory block is passed to the output response. Algorithmically, LSTM performance is limited by matrix multiplication throughput and by the coefficient and data read bandwidth available for a single sequence of input data. LSTM processing consists of matrix multiplication with a specific matrix configuration followed by LSTM cell processing. Because of that configuration (i.e., only one row and one column can be processed at a time), the matrix multiplication is performed inefficiently, resulting in under-utilization of the multipliers and accumulators.
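For reference, the gate behavior described above can be summarized by the standard LSTM cell equations; this is a minimal sketch of the commonly used formulation and is not specific to any particular implementation described herein:

\[
\begin{aligned}
i_t &= \sigma\!\left(W_i\,[h_{t-1}, x_t] + b_i\right), &
f_t &= \sigma\!\left(W_f\,[h_{t-1}, x_t] + b_f\right), \\
o_t &= \sigma\!\left(W_o\,[h_{t-1}, x_t] + b_o\right), &
\tilde{c}_t &= \tanh\!\left(W_c\,[h_{t-1}, x_t] + b_c\right), \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t, &
h_t &= o_t \odot \tanh(c_t),
\end{aligned}
\]

where \(x_t\) is the input sample, \(h_{t-1}\) is the previous output, \(c_t\) is the memory block state, and \(\odot\) denotes element-wise multiplication. With a hidden state of N elements and an input of M elements, each weight matrix is N×(N+M), and the matrix-vector products \(W\,[h_{t-1}, x_t]\) dominate the computation, which is why matrix multiplication throughput and coefficient read bandwidth bound LSTM performance.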
The advantages of the methods and mechanisms described herein may be better understood by referring to the following description in conjunction with the accompanying drawings.
In the following description, numerous specific details are set forth to provide a thorough understanding of the methods and mechanisms presented herein. However, one having ordinary skill in the art should recognize that the various implementations may be practiced without these specific details. In some instances, well-known structures, components, signals, computer program instructions, and techniques have not been shown in detail to avoid obscuring the approaches described herein. It will be appreciated that for simplicity and clarity of illustration, elements shown in the figures have not necessarily been drawn to scale. For example, the dimensions of some of the elements may be exaggerated relative to other elements. In this document, sequences can be alternatively referred to as batches in machine learning terminology, wherein the batches use the same set of weight coefficients for forward long short-term memory (LSTM) inference.
Various systems, apparatuses, and methods for implementing a low latency long short-term memory (LSTM) machine learning engine using sequence interleaving techniques are disclosed herein. In one implementation, a computing system includes at least a host processing unit, a machine learning engine, and a memory subsystem. The host processing unit detects a plurality of sequences which will be processed by the machine learning engine. The host processing unit interleaves the sequences into data blocks and stores the data blocks in the memory subsystem. When the machine learning engine receives a given data block, the machine learning engine performs, in parallel, a plurality of matrix multiplication operations on the plurality of sequences in the given data block and a plurality of coefficients. The outputs of the matrix multiplication operations are then coupled to one or more LSTM layers. Any of various applications can implement the low latency sequence interleaving techniques described herein. For example, forward inference for speech modeling, image labeling, inference servers, artificial intelligence gaming applications, and other applications are able to implement these techniques and achieve a speedup by a factor of N, wherein N is the number of sequences or batches which are interleaved.
Referring now to FIG. 1, a block diagram of one implementation of a computing system 100 is shown. In one implementation, computing system 100 includes at least machine learning engine 105, processor(s) 110, input/output (I/O) interfaces 120, and memory device(s) 130, which are described in more detail below.
In various implementations, machine learning engine 105 includes logic for implementing any of various machine learning algorithms or machine learning models. In one implementation, machine learning engine 105 implements one or more layers of a recurrent neural network. For example, in this implementation, machine learning engine 105 implements one or more matrix multiplication layers and one or more LSTM layers. In another implementation, machine learning engine 105 implements one or more layers of a convolutional neural network. In other implementations, machine learning engine 105 executes other types of machine learning models.
Processor 110 detects multiple separate and independent sequences in memory device(s) 130 that will be processed by machine learning engine 105. In response to detecting the separate sequences that will be processed by machine learning engine 105, processor 110 interleaves the separate sequences together into a single interleaved multi-sequence data stream. In one implementation, the samples from independent sequences (or batches) are fetched from different locations in external system memory device(s) 130, interleaved within multi-sample words, and copied into local memory (not shown) of machine learning engine 105. Then, machine learning engine 105 fetches the multi-sample words with interleaved sequences from the local memory and processes the multi-sample words much more efficiently than previously possible with conventional methods. In one implementation, processing the multi-sample words involves performing matrix multiplication on the multi-sample words with a plurality of coefficients, wherein the plurality of coefficients are stored in an N×(N+M) matrix, wherein N and M are positive integers greater than one. An example of interleaving batches within multi-sample words and copying the multi-sample words into local memory is described in further detail below.
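As an illustrative sketch only (the NumPy layout, function name, and block dimensions below are assumptions for clarity, not the disclosed hardware mechanism), the interleaving step can be pictured as gathering the same time step from each independent sequence and packing those samples into one multi-sample word:

```python
import numpy as np

def interleave_sequences(sequences):
    """Pack samples from independent sequences into multi-sample words.

    sequences: list of B arrays, each of shape (T, M) -- B independent
    batches, T time steps, M features per sample.  Returns an array of
    shape (T, B, M), so word t holds time step t of every sequence and a
    single fetch supplies B samples that share the same weight coefficients.
    """
    return np.stack(sequences, axis=1)

# Hypothetical example: four independent sequences (batches), 16 time steps,
# 8 features per sample, as might be staged into the engine's local memory.
seqs = [np.random.rand(16, 8).astype(np.float32) for _ in range(4)]
words = interleave_sequences(seqs)   # shape (16, 4, 8): one multi-sample word per time step
```

Each word `words[t]` then feeds one pass of the matrix multiplication, so a single read of the coefficient matrix is reused across all four interleaved sequences.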
In one implementation, machine learning engine 105 implements a trained neural network. For example, in this implementation, machine learning engine 105 analyzes a video frame to generate one or more label probabilities for the video frame. For example, potential use cases include at least eye tracking, object recognition, point cloud estimation, ray tracing, light field modeling, depth tracking, and others. For eye tracking use cases, probabilities generated by machine learning engine 105 are based on learned patterns, dwell, transition angles, blink, etc. In other implementations, machine learning engine 105 is customized for other types of use cases. For example, in these implementations, machine learning engine 105 is customized for speech recognition, language modeling, sentiment analysis, text prediction, and/or other applications. In further implementations, machine learning engine 105 executes other types of software models or algorithms besides machine learning models.
Processor(s) 110 are representative of any number and type of processing units (e.g., central processing unit (CPU), graphics processing unit (GPU), digital signal processor (DSP), field programmable gate array (FPGA), application specific integrated circuit (ASIC)). In one implementation, some of the processing associated with the model implemented by machine learning engine 105 is performed by processor(s) 110. Memory device(s) 130 are representative of any number and type of memory devices. For example, the type of memory in memory device(s) 130 can include Dynamic Random Access Memory (DRAM), Static Random Access Memory (SRAM), NAND flash memory, NOR flash memory, Ferroelectric Random Access Memory (FeRAM), or others. Memory device(s) 130 are accessible by machine learning engine 105 and processor(s) 110. I/O interfaces 120 are representative of any number and type of I/O interfaces (e.g., peripheral component interconnect (PCI) bus, PCI-Extended (PCI-X), PCIE (PCI Express) bus, gigabit Ethernet (GBE) bus, universal serial bus (USB)). Various types of peripheral devices can be coupled to I/O interfaces 120. Such peripheral devices include (but are not limited to) displays, keyboards, mice, printers, scanners, joysticks or other types of game controllers, media recording devices, external storage devices, network interface cards, and so forth.
In various implementations, computing system 100 is a computer, laptop, mobile device, game console, server, streaming device, wearable device, or any of various other types of computing systems or devices. It is noted that the number of components of computing system 100 varies from implementation to implementation. For example, in other implementations, there are more or fewer of each component than the number shown in FIG. 1.
Turning now to
Referring now to
Turning now to FIG. 4, a block diagram of one implementation of a matrix multiplication pipeline 400 is shown. In one implementation, coefficients 405A-D are retrieved from memory and provided to flop stages 415 before being written into coefficient matrix 420.
In parallel with coefficients 405A-D being retrieved from memory, sequence data 410 is coupled to data matrix 430. In one implementation, a maximum of 32 coefficients can be read and provided to flop stages 415 per clock cycle, and a maximum of 32 samples can be read and provided to data matrix 430 per clock cycle. Coefficients are coupled from coefficient matrix 420 to multiplier and accumulator units 440 to be multiplied by sequence data coupled from data matrix 430. The outputs of multiplier and accumulator units 440 are coupled to LSTM cell 450. Since coefficient matrix 420 is an N×M matrix and the sequence data is an M×1 vector, the matrix multiplication cannot be parallelized across multiple sequences. Hence, only a single set of eight samples of sequence data is multiplied by the coefficients in a given cycle by multiplier and accumulator units 440, resulting in under-utilization of resources.
Referring now to FIG. 5, a block diagram of one implementation of a matrix multiplication pipeline 500 operating on interleaved sequences is shown.
Matrix multiplication pipeline 500 allows for four times the throughput of matrix multiplication pipeline 400 (of FIG. 4) because samples from four interleaved sequences, rather than from a single sequence, are multiplied by each fetched set of coefficients in a given cycle.
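A minimal NumPy sketch (the dimensions, names, and data below are assumptions chosen to mirror the eight-sample example above, not the hardware datapath) illustrates why interleaving raises multiplier and accumulator utilization: each fetched set of coefficients is applied to four sequence samples instead of one:

```python
import numpy as np

N, M, B = 32, 8, 4   # outputs, samples per word, interleaved sequences
coeffs = np.random.rand(N, M).astype(np.float32)        # coefficient matrix (N x M)

# Pipeline-400 style: one sequence at a time, one M x 1 sample per pass,
# so the coefficients are traversed B separate times.
samples = [np.random.rand(M, 1).astype(np.float32) for _ in range(B)]
outputs_serial = [coeffs @ s for s in samples]

# Pipeline-500 style: the same B samples interleaved into one M x B block,
# so a single traversal of the coefficients produces all B results.
block = np.hstack(samples)                               # shape (M, B)
outputs_interleaved = coeffs @ block                     # shape (N, B)

assert np.allclose(np.hstack(outputs_serial), outputs_interleaved)
```

The interleaved form performs the same multiply-accumulate work, but it presents B columns of independent data to the multipliers at once, which is what enables the factor-of-N (here, factor-of-four) throughput improvement.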
Turning now to FIG. 6, one implementation of a method 600 for interleaving sequences for processing by a machine learning engine is shown.
A processing unit detects a plurality of sequences that will be processed by a machine learning engine (block 605). In response to detecting the plurality of sequences that will be processed by the machine learning engine, the processing unit interleaves the plurality of sequences together into data blocks, wherein each data block includes samples from the plurality of sequences (block 610). Next, the machine learning engine receives a given data block of the plurality of data blocks (block 615). In one implementation, the machine learning engine retrieves the given data block from memory. In another implementation, the processing unit conveys the given data block to the machine learning engine. In a further implementation, the given data block is supplied to the machine learning engine via a processing pipeline.
Then, the machine learning engine performs, in parallel, a plurality of matrix multiplication operations on a plurality of sequences from the given data block and a plurality of coefficients (block 620). Next, the machine learning engine conveys outputs from the plurality of matrix multiplication units to the one or more LSTM layers (block 625). If there are more data blocks to process (conditional block 630, “yes” leg), then method 600 returns to block 615. It is noted that the machine learning engine is able to process subsequent data blocks in back-to-back clock cycles. In other words, in one implementation, the data blocks are processed by the machine learning engine in a pipelined fashion. Otherwise, if there are no more data blocks to process (conditional block 630, “no” leg), then method 600 ends.
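The following sketch (the function and parameter names are hypothetical, and the per-block work is pipelined in hardware rather than executed sequentially as written here) summarizes blocks 605-630 of method 600 as a host-side loop:

```python
import numpy as np

def run_lstm_inference(sequences, coefficients, matmul_fn, lstm_fn):
    """Hypothetical host-side view of method 600 (illustrative only).

    matmul_fn and lstm_fn stand in for the engine's matrix multiplication
    units and LSTM layer(s), respectively.
    """
    # Block 610: interleave samples from all sequences into data blocks.
    data_blocks = np.stack(sequences, axis=1)
    for block in data_blocks:                       # Block 615: receive a given data block.
        products = matmul_fn(block, coefficients)   # Block 620: parallel matrix multiplications.
        lstm_fn(products)                           # Block 625: convey outputs to the LSTM layer(s).
    # Block 630, "no" leg: the loop ends when no data blocks remain.
```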
Referring now to FIG. 7, one implementation of a method 700 for converting input data from a first format to a second format for processing by a machine learning engine is shown.
Next, the processing unit conveys the input data in the second format to the machine learning engine (block 720). In one implementation, the processing unit stores the input data in the memory after converting the input data to the second format. The input data is then conveyed from the memory to the machine learning engine. In another implementation, rather than storing the input data back into memory in the second format, the processing unit converts the input data to the second format in an inline fashion and then provides the input data in the second format to the machine learning engine. After block 720, the machine learning engine processes the input data in the second format to implement a machine learning model (block 725). After block 725, method 700 ends. By processing the input data in the second format, the machine learning engine is able to execute the model more quickly and more efficiently (i.e., with lower power consumption) than if the input data were processed in the first format.
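The two alternatives above (writing the converted data back to memory versus converting it inline) can be sketched as follows; the NumPy layout, the dictionary standing in for memory, and the function names are assumptions for illustration, with the second format taken to be the interleaved multi-sample layout described earlier:

```python
import numpy as np

def convert_and_store(input_sequences, memory):
    """Alternative 1: convert to the second (interleaved) format and write it
    back to memory; the machine learning engine later fetches the converted copy."""
    memory["interleaved"] = np.stack(input_sequences, axis=1)
    return memory["interleaved"]

def convert_inline(input_sequences):
    """Alternative 2: convert block by block on the fly and stream each block
    to the machine learning engine without a round trip through memory."""
    for t in range(len(input_sequences[0])):
        yield np.stack([seq[t] for seq in input_sequences])
```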
Turning now to
Referring now to
In various implementations, program instructions of a software application are used to implement the methods and/or mechanisms described herein. For example, program instructions executable by a general or special purpose processor are contemplated. In various implementations, such program instructions are represented by a high level programming language. In other implementations, the program instructions are compiled from a high level programming language to a binary, intermediate, or other form. Alternatively, program instructions are written that describe the behavior or design of hardware. Such program instructions are represented by a high-level programming language, such as C. Alternatively, a hardware design language (HDL) such as Verilog is used. In various implementations, the program instructions are stored on any of a variety of non-transitory computer readable storage mediums. The storage medium is accessible by a computing system during use to provide the program instructions to the computing system for program execution. Generally speaking, such a computing system includes at least one or more memories and one or more processors configured to execute program instructions.
It should be emphasized that the above-described implementations are only non-limiting examples of implementations. Numerous variations and modifications will become apparent to those skilled in the art once the above disclosure is fully appreciated. It is intended that the following claims be interpreted to embrace all such variations and modifications.