The present disclosure relates to a circuit for performing convolution neural network operations and, more specifically, to systems and methods for dynamically shaping and segmenting work units in a neural network processor.
An artificial neural network (ANN) is a computing system or model that uses a collection of connected nodes to process input data. The ANN is typically organized into layers where different layers perform different types of transformation on their input. Extensions or variants of ANN such as convolution neural networks (CNN), recurrent neural networks (RNN) and deep belief networks (DBN) have come to receive much attention. These computing systems or models often involve extensive computing operations including multiplication and accumulation. For example, CNN is a class of machine learning techniques that primarily uses convolution between input data and kernel data, which can be decomposed into multiplication and accumulation operations.
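The decomposition of convolution into multiplication and accumulation can be sketched as follows. This is an illustrative software sketch only, not the disclosed circuit; each output sample of a convolution is a sum of products between an input patch and kernel coefficients.

```python
# Illustrative sketch: one output element of a 2-D convolution reduces to a
# chain of multiply-accumulate operations over an input patch and a kernel.
def conv2d_element(input_patch, kernel):
    """Compute one output sample as a sum of products."""
    acc = 0
    for r in range(len(kernel)):
        for c in range(len(kernel[0])):
            acc += input_patch[r][c] * kernel[r][c]  # multiply, then accumulate
    return acc

patch = [[1, 2], [3, 4]]
kernel = [[1, 0], [0, 1]]
print(conv2d_element(patch, kernel))  # 1*1 + 2*0 + 3*0 + 4*1 = 5
```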
Depending on the types of input data and operations to be performed, these machine learning systems or models can be configured differently. Such varying configurations may include, for example, pre-processing operations, the number of channels in input data, kernel data to be used, the non-linear function to be applied to the convolution result, and the application of various post-processing operations. Using a central processing unit (CPU) and its main memory to instantiate and execute machine learning systems or models of various configurations is relatively easy because such systems or models can be instantiated with mere updates to code. However, relying solely on the CPU for various operations of these machine learning systems or models would consume significant bandwidth of the CPU as well as increase the overall power consumption.
Embodiments relate to a neural processor circuit including multiple neural engine circuits, a data buffer, and a kernel fetcher circuit. The neural engine circuits are configured to perform convolution operations on at least a work unit of input data and kernel data. The data buffer is placed between the neural engine circuits and a system memory external to the neural processor circuit. The data buffer stores at least a portion of the input data received from the system memory for sending to the neural engine circuits. The portion of the input data includes the work unit of the input data. The data buffer further stores output data received from the neural engine circuits. The kernel fetcher circuit is placed between the neural engine circuits and the system memory. The kernel fetcher circuit receives one or more kernels from the system memory, and sends a corresponding kernel to the neural engine circuits.
At least one of the neural engine circuits is configured to receive multiple sub-channels of the portion of the input data from the data buffer. The at least one neural engine circuit further receives a kernel of the one or more kernels from the kernel fetcher circuit, wherein the kernel was decomposed into a corresponding sub-kernel for each sub-channel of the portion of the input data. In one embodiment, the at least one neural engine circuit performs a convolution operation on each sub-channel of the portion of the input data and the corresponding sub-kernel. The at least one neural engine circuit accumulates corresponding outputs of each sub-channel portion of the convolution operation to generate a single channel of the output data.
The figures depict, and the detailed description describes, various non-limiting embodiments for purposes of illustration only.
Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the various described embodiments. However, the described embodiments may be practiced without these specific details. In other instances, well-known methods, procedures, components, circuits, and networks have not been described in detail so as not to unnecessarily obscure aspects of the embodiments.
Embodiments of the present disclosure relate to performing convolution with input stride reduction, transposed convolution with output stride expansion, large kernel mode convolution operations, or convolution on small patches of input data. The input stride allows the convolution to skip input samples in a work unit of input data, which reduces the resolution of output data compared to the input data. The output stride expansion is an inverse operation to the input stride reduction, which is used when running, e.g., an input-strided convolution backwards with a transposed kernel. Large kernel mode allows kernels of effectively doubled size to be used for both convolution and transposed convolution. Furthermore, convolution on small patches of input data implemented as presented herein increases utilization of a neural processor circuit when performing terminal segments of convolution neural network operations.
A processing cycle described herein refers to a time period for sending a work unit to a neural processing circuit and then performing a multiply-add operation on the work unit in a neural engine circuit of the neural processing circuit.
Embodiments of electronic devices, user interfaces for such devices, and associated processes for using such devices are described. In some embodiments, the device is a portable communications device, such as a mobile telephone, that also contains other functions, such as personal digital assistant (PDA) and/or music player functions. Exemplary embodiments of portable multifunction devices include, without limitation, the iPhone®, iPod Touch®, Apple Watch®, and iPad® devices from Apple Inc. of Cupertino, Calif. Other portable electronic devices, such as wearables, laptops or tablet computers, are optionally used. In some embodiments, the device is not a portable communications device, but is a desktop computer or other computing device that is not designed for portable use. In some embodiments, the disclosed electronic device may include a touch sensitive surface (e.g., a touch screen display and/or a touch pad). An example electronic device described below in conjunction with
In some embodiments, device 100 includes touch screen 150, menu button 104, push button 106 for powering the device on/off and locking the device, volume adjustment buttons 108, Subscriber Identity Module (SIM) card slot 110, headset jack 112, and docking/charging external port 124. Push button 106 may be used to turn the power on/off on the device by depressing the button and holding the button in the depressed state for a predefined time interval; to lock the device by depressing the button and releasing the button before the predefined time interval has elapsed; and/or to unlock the device or initiate an unlock process. In an alternative embodiment, device 100 also accepts verbal input for activation or deactivation of some functions through microphone 113. The device 100 includes various components including, but not limited to, a memory (which may include one or more computer readable storage mediums), a memory controller, one or more central processing units (CPUs), a peripherals interface, an RF circuitry, an audio circuitry, speaker 111, microphone 113, input/output (I/O) subsystem, and other input or control devices. Device 100 may include one or more image sensors 164, one or more proximity sensors 166, and one or more accelerometers 168. The device 100 may include components not shown in
Device 100 is only one example of an electronic device, and device 100 may have more or fewer components than listed above, some of which may be combined into a component or have a different configuration or arrangement. The various components of device 100 listed above are embodied in hardware, software, firmware or a combination thereof, including one or more signal processing and/or application specific integrated circuits (ASICs).
Image sensor 202 is a component for capturing image data and may be embodied, for example, as a complementary metal-oxide-semiconductor (CMOS) active-pixel sensor, a camera, video camera, or other devices. Image sensor 202 generates raw image data that is sent to SOC component 204 for further processing. In some embodiments, the image data processed by SOC component 204 is displayed on display 216, stored in system memory 230 or persistent storage 228, or sent to a remote computing device via network connection. The raw image data generated by image sensor 202 may be in a Bayer color filter array (CFA) pattern (hereinafter also referred to as "Bayer pattern").
Motion sensor 234 is a component or a set of components for sensing motion of device 100. Motion sensor 234 may generate sensor signals indicative of orientation and/or acceleration of device 100. The sensor signals are sent to SOC component 204 for various operations such as turning on device 100 or rotating images displayed on display 216.
Display 216 is a component for displaying images as generated by SOC component 204. Display 216 may include, for example, a liquid crystal display (LCD) device or an organic light emitting diode (OLED) device. Based on data received from SOC component 204, display 216 may display various images, such as menus, selected operating parameters, images captured by image sensor 202 and processed by SOC component 204, and/or other information received from a user interface of device 100 (not shown).
System memory 230 is a component for storing instructions for execution by SOC component 204 and for storing data processed by SOC component 204. System memory 230 may be embodied as any type of memory including, for example, dynamic random access memory (DRAM), synchronous DRAM (SDRAM), double data rate (DDR, DDR2, DDR3, etc.) SDRAM, RAMBUS DRAM (RDRAM), static RAM (SRAM) or a combination thereof. In some embodiments, system memory 230 may store pixel data or other image data or statistics in various formats.
Persistent storage 228 is a component for storing data in a non-volatile manner. Persistent storage 228 retains data even when power is not available. Persistent storage 228 may be embodied as read-only memory (ROM), flash memory or other non-volatile random access memory devices.
SOC component 204 is embodied as one or more integrated circuit (IC) chips and performs various data processing processes. SOC component 204 may include, among other subcomponents, image signal processor (ISP) 206, central processing unit (CPU) 208, network interface 210, sensor interface 212, display controller 214, neural processor circuit 218, graphics processor (GPU) 220, memory controller 222, video encoder 224, storage controller 226, and bus 232 connecting these subcomponents. SOC component 204 may include more or fewer subcomponents than those shown in
ISP 206 is hardware that performs various stages of an image processing pipeline. In some embodiments, ISP 206 may receive raw image data from image sensor 202, and process the raw image data into a form that is usable by other subcomponents of SOC component 204 or components of device 100. ISP 206 may perform various image-manipulation operations such as image translation operations, horizontal and vertical scaling, color space conversion and/or image stabilization transformations, as described below in detail with reference to
CPU 208 may be embodied using any suitable instruction set architecture, and may be configured to execute instructions defined in that instruction set architecture. CPU 208 may be general-purpose or embedded processors using any of a variety of instruction set architectures (ISAs), such as the x86, PowerPC, SPARC, RISC, ARM or MIPS ISAs, or any other suitable ISA. Although a single CPU is illustrated in
Graphics processing unit (GPU) 220 is graphics processing circuitry for processing graphical data. For example, GPU 220 may render objects to be displayed into a frame buffer (e.g., one that includes pixel data for an entire frame). GPU 220 may include one or more graphics processors that may execute graphics software to perform a part or all of the graphics operations, or hardware acceleration of certain graphics operations.
Neural processor circuit 218 is a circuit that performs various machine learning operations based on computations including multiplication, adding and accumulation. Such computations may be arranged to perform, for example, convolution of input data and kernel data. Neural processor circuit 218 is a configurable circuit that performs these operations in a fast and power-efficient manner while relieving CPU 208 of resource-intensive operations associated with neural network operations. Neural processor circuit 218 may receive the input data from sensor interface 212, the image signal processor 206, system memory 230 or other sources such as network interface 210 or GPU 220. The output of neural processor circuit 218 may be provided to various components of device 100 such as the image signal processor 206, system memory 230 or CPU 208 for various operations. The structure and operation of neural processor circuit 218 is described below in detail with reference to
Network interface 210 is a subcomponent that enables data to be exchanged between device 100 and other devices via one or more networks (e.g., carrier or agent devices). For example, video or other image data may be received from other devices via network interface 210 and be stored in system memory 230 for subsequent processing (e.g., via a back-end interface to image signal processor 206, such as discussed below in
Sensor interface 212 is circuitry for interfacing with motion sensor 234. Sensor interface 212 receives sensor information from motion sensor 234 and processes the sensor information to determine the orientation or movement of the device 100.
Display controller 214 is circuitry for sending image data to be displayed on display 216. Display controller 214 receives the image data from ISP 206, CPU 208, graphic processor or system memory 230 and processes the image data into a format suitable for display on display 216.
Memory controller 222 is circuitry for communicating with system memory 230. Memory controller 222 may read data from system memory 230 for processing by ISP 206, CPU 208, GPU 220 or other subcomponents of SOC component 204. Memory controller 222 may also write data to system memory 230 received from various subcomponents of SOC component 204.
Video encoder 224 is hardware, software, firmware or a combination thereof for encoding video data into a format suitable for storing in persistent storage 228 or for passing the data to network interface 210 for transmission over a network to another device.
In some embodiments, one or more subcomponents of SOC component 204 or some functionality of these subcomponents may be performed by software components executed on ISP 206, CPU 208 or GPU 220. Such software components may be stored in system memory 230, persistent storage 228 or another device communicating with device 100 via network interface 210.
Image data or video data may flow through various data paths within SOC component 204. In one example, raw image data may be generated by image sensor 202 and processed by ISP 206, and then sent to system memory 230 via bus 232 and memory controller 222. After the image data is stored in system memory 230, it may be accessed by video encoder 224 for encoding or by display 216 for displaying via bus 232.
Neural processor circuit 218 is a configurable circuit that performs neural network operations on the input data based at least on kernel data 340. For this purpose, neural processor circuit 218 may include, among other components, neural task manager 310, a plurality of neural engines 314A through 314N (hereinafter collectively referred to as "neural engines 314" and individually also referred to as "neural engine 314"), kernel direct memory access (DMA) 324, data buffer 318 and buffer DMA 320. Neural processor circuit 218 may include other components not illustrated in
Each of neural engines 314 performs computing operations for neural network operations in parallel. Depending on the load of operation, the entire set of neural engines 314 may be operated or only a subset of the neural engines 314 may be operated while the remaining neural engines 314 are placed in a power save mode to conserve power. Each of neural engines 314 includes components for storing one or more kernels, for performing multiply-accumulate operations, and for post-processing to generate output data 328, as described below in detail with reference to
Neural task manager 310 manages the overall operation of neural processor circuit 218. Neural task manager 310 may receive a task list from a compiler executed by CPU 208, store tasks in its task queues, choose a task to perform, and send instructions to other components of the neural processor circuit 218 for performing the chosen task. Neural task manager 310 may also perform switching of tasks on detection of events such as receiving instructions from CPU 208. In one or more embodiments, the neural task manager 310 sends rasterizer information to the components of the neural processor circuit 218 to enable each of the components to track, retrieve or process appropriate portions of the input data and kernel data, as described below in detail with reference to
Kernel DMA 324 is a read circuit that fetches kernel data from a source (e.g., system memory 230) and sends kernel data 326A through 326N to each of the neural engines 314. Kernel data represents information from which kernel elements can be extracted. In one embodiment, the kernel data may be in a compressed format which is decompressed at each of neural engines 314. Although kernel data provided to each of neural engines 314 may be the same in some instances, the kernel data provided to each of neural engines 314 is different in most instances.
Data buffer 318 is a temporary storage for storing data associated with the neural network operations. In one embodiment, data buffer 318 is embodied as a memory that can be accessed by all of the neural engines 314. Data buffer 318 may store input data 322A through 322N for feeding to corresponding neural engines 314A through 314N, as well as output from each of neural engines 314A through 314N for feeding back into neural engines 314 or sending to a target circuit (e.g., system memory 230). The operations of data buffer 318 and other components of the neural processor circuit 218 are coordinated so that the input data and intermediate data stored in the data buffer 318 are reused across multiple operations at the neural engines 314, thereby reducing data transfer to and from system memory 230. Data buffer 318 may be operated in a broadcast mode where input data of all input channels are fed to all neural engines 314 or in a unicast mode where input data of a subset of input channels are fed to each neural engine 314.
The input data 322 stored in data buffer 318 may be part of, among others, image data, histogram of oriented gradients (HOG) data, audio data, meta data, output data 328 of a previous cycle of the neural engine 314, and other processed data received from other components of the SOC component 204. Further, input data 322 may refer to all of the data stored in data buffer 318 or one or more portions of the input data stored in data buffer 318.
Buffer DMA 320 includes a read circuit that receives a portion (e.g., tile) of the input data from a source (e.g., system memory 230) for storing in data buffer 318, and a write circuit that forwards data from data buffer 318 to a target (e.g., system memory).
Example Neural Engine Architecture
Neural engine 314 may include, among other components, input buffer circuit 402, computation core 416, neural engine (NE) control 418, kernel extract circuit 432, accumulators 414 and output circuit 424. Neural engine 314 may include further components not illustrated in
Input buffer circuit 402 is a circuit that stores a portion of the input data 322 as it is received from the data buffer 318 and sends an appropriate portion 408 of input data for a current task or process loop to computation core 416 for processing. Input buffer circuit 402 includes a shifter 410 that shifts read locations of input buffer circuit 402 to change the portion 408 of input data sent to computation core 416. By changing portions of input data provided to the computation core 416 via shifting, neural engine 314 can perform multiply-accumulate operations for different portions of input data with fewer read operations. In one or more embodiments, the input data 322 includes data of different convolution groups and/or input channels.
Kernel extract circuit 432 is a circuit that receives kernel data 326 from kernel DMA 324 and extracts kernel coefficients 422. In one embodiment, the kernel extract circuit 432 references a look up table (LUT) and uses a mask to reconstruct a kernel from compressed kernel data 326. The mask indicates locations in the reconstructed kernel to be padded with zero and remaining locations to be filled with numbers. The kernel coefficients 422 of the reconstructed kernel are sent to computation core 416 to populate register in multiply-add (MAD) circuits of computation core 416. In other embodiments, the kernel extract circuit 432 receives kernel data in an uncompressed format and the kernel coefficients are determined without referencing a LUT or using a mask.
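The mask-based reconstruction can be sketched as follows. This is a hedged software illustration: the function name, flat layout, and 0/1 mask encoding are assumptions for clarity, not the circuit's actual compressed kernel format.

```python
# Hedged sketch of mask-based kernel reconstruction: the mask marks which
# positions of the reconstructed kernel are padded with zero; the remaining
# positions are filled, in order, from the compressed coefficient stream.
def reconstruct_kernel(mask, coefficients):
    """mask: flat list of flags (1 = real coefficient, 0 = padded zero)."""
    coeff_iter = iter(coefficients)
    return [next(coeff_iter) if m else 0 for m in mask]

# A 3-element compressed stream expands to a 5-tap kernel with two zeros.
print(reconstruct_kernel([1, 0, 1, 0, 1], [7, 8, 9]))  # [7, 0, 8, 0, 9]
```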
Computation core 416 is a programmable circuit that performs computation operations. For this purpose, the computation core 416 may include MAD circuits MAD0 through MADN and a post-processor 428. Each of MAD circuits MAD0 through MADN may store an input value in the portion 408 of the input data and a corresponding kernel coefficient in the kernel coefficients 422. The input value and the corresponding kernel coefficient are multiplied in each of MAD circuits to generate a processed value 412.
Accumulator 414 is a memory circuit that receives and stores processed values 412 from MAD circuits. The processed values stored in accumulator 414 may be sent back as feedback information 419 for further multiply and add operations at MAD circuits or sent to post-processor 428 for post-processing. Accumulator 414 in combination with MAD circuits forms a multiply-accumulator (MAC) 404. In one or more embodiments, accumulator 414 may have subunits where each subunit sends data to different components of neural engine 314. For example, during a processing cycle, data stored in a first subunit of accumulator 414 is sent to the MAD circuits while data stored in a second subunit of accumulator 414 is sent to post-processor 428.
Post-processor 428 is a circuit that performs further processing of values 412 received from accumulator 414. The post-processor 428 may perform operations including, but not limited to, applying non-linear functions (e.g., Rectified Linear Unit (ReLU)), normalized cross-correlation (NCC), merging the results of performing neural operations on 8-bit data into 16-bit data, and local response normalization (LRN). The result of such operations is output from the post-processor 428 as processed values 417 to output circuit 424.
NE control 418 controls operations of other components of the neural engine 314 based on the operation modes and parameters of neural processor circuit 218. Depending on different modes of operation (e.g., group convolution mode or non-group convolution mode) or parameters (e.g., the number of input channels and the number of output channels), neural engine 314 may operate on different input data in different sequences, return different values from accumulator 414 to MAD circuits, and perform different types of post-processing operations at post-processor 428. To configure components of the neural engine 314 to operate in a desired manner, the NE control 418 sends control signals to components of the neural engine. NE control 418 may also include rasterizer 430 that tracks the current task or process loop being processed at neural engine 314, as described below in detail with reference to
Output circuit 424 receives processed values 417 from the post-processor 428 and interfaces with data buffer 318 to store processed values 417 in data buffer 318. For this purpose, output circuit 424 may send out output data 328 in a sequence or a format that is different from the sequence or format in which the processed values 417 are processed in post-processor 428.
The components in the neural engine 314 may be configured during a configuration period by the NE control 418 and the neural task manager 310. For this purpose, the neural task manager 310 sends configuration information to the neural engine 314 during the configuration period. The configurable parameters and modes may include, but are not limited to, mapping between input data elements and kernel elements, the number of input channels, the number of output channels, performing of output strides, and enabling/selection of post-processing operations at the post processor 428.
Input data is typically split into smaller pieces of data for parallel processing at multiple neural engines 314. Often multiple cycles of operations are performed to generate output for a task associated with a neural network. A compiler executed by CPU 208 analyzes the hierarchy and nodes of the neural network and determines how the input data is to be segmented based on the hardware constraints of the neural processor circuit 218. One of the functions of the compiler is to determine how input data is to be split into smaller data units for processing at the neural engines 314, and how the processing is to be iterated in loops to produce the result for tasks.
Within the loop for each convolution group is a processing loop for a slice of the input data. The entire input data for a convolution operation is segmented into multiple strips of slices in an overlapping manner, as shown in
For each work unit, an internal processing loop may be provided for an output channel group (OCG). The number of output channels produced for a given work unit by a single cycle of the computation core 416 is referred to as an OCG. Depending on operation modes, each neural engine 314 may process output data of different numbers of output channels (e.g., 8 channels, 32 channels) for a single load of input data into its input buffer circuit 402.
For each output channel group, an internal processing loop may be provided for an input channel (Cin). If an input stride is implemented to skip certain input data, loops for sub-input channels (Sub-Cin) may be provided within the processing loop for the input channel (Cin).
For each input channel or each sub-input channel, internal loops are provided for processing horizontal spatial support for a kernel and the vertical support within each horizontal spatial support. The spatial support refers to the input data for convolution with the kernel, and includes overfetched input data for performing convolution at the edges of the input data.
Overfetch refers to fetching additional input data in current slice, tile or work unit so that proper dimension of input data can be provided for convolution with a kernel. In one or more embodiments, overfetch is performed vertically between slices to obtain additional rows of input data (shown as overlapping portions 602, 604, 606 in
For each spatial support for the kernel, an internal processing loop for an output channel (OC) is provided to generate output data for each output channel (Cout). In cases where output stride implements a spatial upsampling, an additional inner loop for processing each sub-output channel is provided. Loading of kernel coefficients and MAC operations are performed within the loop for the output channel (OC) or sub-output channel if an output stride is implemented, to generate output data for the output channel (OC) or sub-output channel.
The nested loop structure of
In one or more embodiments, the operations associated with dividing the input space into smaller units and processing these smaller units as described above with reference to
By providing rasterizers in different components of neural processor circuit 218, overhead in data transmitted between the components of the neural processor circuit 218 may be reduced. If a single central rasterizer were provided to control different components of the neural processor circuit 218, the kernel data, input data, and output data transmitted between the components would need to include metadata identifying the associated position in the loops of the task, such as convolution group, tile, slice, work unit, input channel and output channel. By using distributed rasterizers, no separate metadata is needed to transmit the kernel data, input data and output data among components of the neural processor circuit 218.
Example Process at Neural Engine Architecture
Rasterizer 718 in data buffer 318 then instructs 808 data buffer 318 to send a work unit to one or more neural engines 314. The work unit is then stored in input buffer circuits 402 of the one or more neural engines 314.
In one or more embodiments, input buffer circuit 402 selects 816 a portion of the work unit to be sent to MAC 404 to perform multiply-accumulate operations. Then MAC 404 performs 820 multiply-accumulate operations on the selected portion of the work unit using a corresponding kernel. Then it is determined 824 if the entire work unit is processed at one or more neural engines 314. If not, the selected portion of the work unit is shifted by shifter 410 and the process returns to perform 820 another round of multiply-accumulate operations.
If it is determined 824 that the entire work unit was processed, then the process proceeds to determine 832 if all work units in the tile were processed. If not, then the process proceeds 836 to the next work unit by having data buffer 318 send 808 a next work unit to one or more neural engines 314, and repeats the subsequent processes.
If it is determined 832 that all work units in the tile were processed by the neural engines 314, the process proceeds to determine 840 whether all tiles for the input data were processed. If not, the process proceeds 844 to a next tile by having rasterizer 720 instruct 804 buffer DMA 320 to receive a next tile from system memory 230 and repeats the subsequent processes.
If it is determined 840 that all tiles of the input data are processed, then the process ends for the current input data. Then, the process may be repeated to process the next input data or proceed to the next task.
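The control flow of the process above can be sketched as nested iteration over tiles, work units, and shifted portions. This is a hedged software sketch only; the `process_portion` callback stands in for the multiply-accumulate hardware of MAC 404 and is an assumption of this illustration.

```python
# Hedged sketch of the process flow: iterate tiles (840/844), then work units
# within each tile (832/836), then shifted portions of each work unit
# (824/828), performing multiply-accumulate on each portion (820).
def run(tiles, process_portion):
    for tile in tiles:                   # next tile received from system memory
        for work_unit in tile:           # next work unit sent from data buffer
            for portion in work_unit:    # shifter selects the next portion
                process_portion(portion) # multiply-accumulate at MAC 404

seen = []
run([[["a", "b"]], [["c"]]], seen.append)
print(seen)  # ['a', 'b', 'c']
```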
Embodiments of the process as described above with reference to
Convolution operations with input stride can be used to reduce spatial dimensions of input data 322 by skipping samples of input data 322 in horizontal and/or vertical directions. The input stride allows a convolution operation to skip samples of input data 322, reducing the resolution of output data 328 compared to input data 322.
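The effect of an input stride can be sketched in one dimension as follows. This is an illustrative sketch, not the disclosed circuit: a stride S makes the convolution window hop S samples between outputs, so the output has roughly 1/S the input resolution.

```python
# Illustrative sketch: input stride S skips samples between successive outputs,
# reducing output resolution by roughly a factor of S.
def conv1d_strided(x, w, stride):
    taps = len(w)
    return [sum(x[i + k] * w[k] for k in range(taps))
            for i in range(0, len(x) - taps + 1, stride)]

x = [1, 2, 3, 4, 5, 6, 7]
w = [1, 1]
print(conv1d_strided(x, w, 1))  # [3, 5, 7, 9, 11, 13] -- full resolution
print(conv1d_strided(x, w, 2))  # [3, 7, 11] -- stride 2 halves the resolution
```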
The 5×5 shaped kernel data 326 (kernel data 924) may be decomposed offline (e.g., by a compiler) into sub-kernels 934, 936, 938, 940 of smaller spatial support than that of the original 5×5 kernel data 326. The swizzled 5×5 shaped kernel data 326 may be stored in kernel extract circuit 432 in the post-swizzled order, e.g., kernel coefficients for sub-channel 0 associated with sub-kernel 934 are stored first in kernel extract circuit 432, followed by kernel coefficients for sub-channel 1 associated with sub-kernel 936, etc. The at least one neural engine 314 convolves each sub-channel 926, 928, 930, 932 and kernel coefficients from a corresponding sub-kernel 934, 936, 938, 940, as shown in
As discussed, the input stride allows convolution to skip samples of portion of input data 322, thereby reducing the resolution of output data 328 compared to that of portion of input data 322. Kernel data 326 may be applied at the input resolution, and the effect would be as if kernel data 326 was applied to all samples of portion of input data 322, and then output data 328 was subsampled. If convolution with input stride reduction was implemented in this manner, the intermediate result would be wasted and 2×2 input stride would only utilize approximately 25% of MAC resources at each neural engine 314, e.g., MAD circuits MAD0 through MADN in MAC 404. Instead of running convolution at the input resolution and discarding convolution results to obtain the output resolution, the at least one neural engine 314 may perform convolution at the output resolution, by overfetching portion of input data 322 and performing a sub-channel swizzle to convert additional information into sub-channels.
The algorithm for performing sub-channel swizzle is illustrated in
Each neural engine 314 may be configured to stride through kernel data 326 stored in kernel extract circuit 432 stepping by, e.g., sub-sampling factors Sx and Sy, starting with a phase offset of a sub-channel of portion 408 of input data. In the illustrative embodiment shown in
Data buffer 318 may overfetch portion of input data 322 from system memory 230 with phased overfetch. For example, a 16×16 portion of input data 322 for convolution with 1×5 shaped kernel data 326 with 1×2 stride (i.e., Sx=1 and Sy=2) can be overfetched by data buffer 318 as a 16×35 portion of input data 322 and de-interleaved into a first sub-channel of a 16×18 portion of input data 322 and a second sub-channel of a 16×17 portion of input data 322. The first sub-channel portion of input data 322 may be convolved with 1×3 shaped kernel data 326, and the second sub-channel portion of input data 322 may be convolved with 1×2 shaped kernel data 326.
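The phased overfetch example above can be reproduced in one dimension. The sample values below are arbitrary; only the sizes (35 overfetched samples, sub-channels of 18 and 17 samples, sub-kernels of length 3 and 2, 16 outputs) mirror the text:

```python
def conv1d(x, w):
    """Valid 1-D correlation."""
    return [sum(x[n + k] * w[k] for k in range(len(w)))
            for n in range(len(x) - len(w) + 1)]

def conv1d_strided(x, w, s):
    return [sum(x[n * s + k] * w[k] for k in range(len(w)))
            for n in range((len(x) - len(w)) // s + 1)]

out_len, w = 16, [3, 1, 4, 1, 5]                 # length-5 kernel, stride 2
x = [(7 * i) % 11 for i in range((out_len - 1) * 2 + len(w))]  # 35 samples

direct = conv1d_strided(x, w, 2)                 # runs at the input resolution

even, odd = x[0::2], x[1::2]                     # de-interleave: 18 and 17 samples
w_even, w_odd = w[0::2], w[1::2]                 # sub-kernels: length 3 and 2
swizzled = [a + b for a, b in zip(conv1d(even, w_even), conv1d(odd, w_odd))]

assert direct == swizzled and len(direct) == 16
```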
When kernel data 326 is odd-sized and a convolution with input stride of two is performed, individual sub-channels of portion of input data 322 and sub-kernels extracted from kernel extract circuit 432 would have different sizes, which is not suitable for efficient execution of convolution at neural engine 314. In this case, kernel data 326 stored into kernel extract circuit 432 is padded with zeros to obtain kernel data 326 having a spatial shape that is a multiple of two in both spatial dimensions. For example, 5×5 shaped kernel data 326 for convolution with 2×2 input stride can be zero-padded to become 6×6 shaped kernel data 326 stored in kernel extract circuit 432. In an embodiment, NE control 418 may instruct zero-padding on 5×5 shaped kernel data 326 and configure neural engine 314 accordingly. If kernel data 326 uses compression (e.g., when being stored in kernel extract circuit 432), the padded zeros may be skipped by neural engine 314 with limited performance loss.
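A sketch of the zero-padding step, assuming padding on the bottom and right edges (the padding side here is an illustrative choice):

```python
def pad_kernel_even(kernel):
    """Zero-pad a 2-D kernel so both spatial dimensions become multiples
    of two, making every stride-2 phase sub-kernel the same size."""
    h, w = len(kernel), len(kernel[0])
    ph, pw = h % 2, w % 2
    padded = [row + [0] * pw for row in kernel]
    padded += [[0] * (w + pw) for _ in range(ph)]
    return padded

k5 = [[1] * 5 for _ in range(5)]
k6 = pad_kernel_even(k5)            # 5x5 -> 6x6, new taps are zero
assert len(k6) == 6 and len(k6[0]) == 6

# every stride-2 phase of the padded kernel is now 3 taps wide and tall
assert all(len(k6[p::2]) == 3 for p in (0, 1))
assert all(len(row[q::2]) == 3 for row in k6 for q in (0, 1))
```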
To perform convolution with input stride reduction, at least one of neural engines 314 is configured to receive multiple sub-channels of portion of input data 322 from data buffer 318. Each sub-channel may be stored into input buffer circuit 402 and provided to MAC 404 as portion 408 of input data. Kernel data 326 received from kernel DMA 324 (i.e., kernel fetcher circuit) and stored at kernel extract circuit 432 may correspond to a subsampled sub-kernel for each sub-channel of portion of input data 322. Neural engine 314 performs a convolution operation on each sub-channel of portion 408 of input data 322 and the corresponding sub-kernel (i.e., corresponding kernel coefficients 422 extracted from kernel extract circuit 432). Neural engine 314 accumulates, by accumulators 414, corresponding processed values 412 of each sub-channel portion of the convolution operation to generate a single channel of processed values 412 and a single channel of output data 328 for storage into data buffer 318. Data buffer 318 may de-interleave a channel of portion of input data 322 into the sub-channels of portion of input data 322 for broadcasting to input buffer circuit 402. Neural engine 314 may receive the sub-channels of portion of input data 322 from data buffer 318 over multiple processing cycles. In an embodiment, kernel data 326 stored into kernel extract circuit 432 includes padded zeros, and a spatial size of kernel data 326 with padded zeros is a multiple of two in each spatial dimension of kernel data 326.
The inverse operation to input stride reduction is output stride expansion. The output stride expansion can be used when running an input-strided convolution or average-pooled network layer backwards with a transposed kernel, an operation referred to herein as ‘transposed convolution’ because it represents an inverse operation of convolution. Transposed convolution can be utilized in the backpropagation pass of a CNN when a convolution layer is trained. Transposed convolution can be useful in various applications, e.g., image style transfer, per-pixel image segmentation, etc.
Data buffer 318 and neural engines 314 support transposed convolution with output stride expansion by producing multiple sub-channels of processed values 412 from each input channel of portion of input data 322. The sub-channels of processed values 412 may be generated in different accumulators 414 in the at least one neural engine 314 using sub-kernels 1116, 1118, 1120, 1122 stored as subsampled kernel data 326 in kernel extract circuit 432. For example, in the case of 5×5 shaped kernel 1114 with output stride of two in both spatial dimensions, portion 1112 of input data 322 may be convolved with sub-kernels 1116, 1118, 1120, 1122 of size 3×3, 3×2, 2×3 and 2×2 into four different accumulators 414. At the end of convolution, four sub-channels of processed values 412 may be post-processed in post-processor 428, stored in output circuit 424 and written back to data buffer 318 as output data 328. Data buffer 318 may interleave the four sub-channels of output data 328 to produce output data 328 that is four times as large as portion 1112 of input data 322.
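The output stride expansion can be checked in one dimension: a direct transposed convolution (scatter-accumulate) matches the interleave of two sub-kernel convolutions, mirroring how separate accumulators 414 would be combined in 2-D. Helper names and values are illustrative:

```python
def transposed_conv1d(x, w, s):
    """Direct transposed convolution with stride s: scatter-accumulate
    each input sample through the full kernel."""
    out = [0] * ((len(x) - 1) * s + len(w))
    for n, xv in enumerate(x):
        for k, wv in enumerate(w):
            out[n * s + k] += xv * wv
    return out

def full_conv1d(x, w):
    """'Full' 1-D convolution (zero-padded at the borders)."""
    return [sum(x[t - j] * w[j] for j in range(len(w)) if 0 <= t - j < len(x))
            for t in range(len(x) + len(w) - 1)]

x = [2, 7, 1, 8, 2, 8]
w = [3, 1, 4, 1, 5]                  # length-5 kernel, output stride 2

direct = transposed_conv1d(x, w, 2)

# phase sub-kernels: even taps produce even outputs, odd taps odd outputs
even = full_conv1d(x, w[0::2])       # length len(x)+2
odd = full_conv1d(x, w[1::2])        # length len(x)+1
interleaved = [0] * len(direct)
interleaved[0::2], interleaved[1::2] = even, odd

assert interleaved == direct
```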
As in the input stride mode, when using output stride expansion, kernel data 326 may be padded with zeros to obtain zero-padded kernel data 326 having an even size in both spatial dimensions. For example, 5×5 shaped kernel data 326 used for 2×2 output stride expansion requires zero-padding into 6×6 shaped kernel data 326. The sparseness feature can efficiently skip the padded zeros when compression is enabled.
To perform convolution with output stride expansion, the at least one neural engine 314 can be configured to receive one or more channels of portion of input data 322 from data buffer 318. Each channel of portion of input data 322 may be stored in input buffer circuit 402 and provided to MAC 404 for convolution as portion 408 of input data. The at least one neural engine 314 may further receive kernel data 326 at kernel extract circuit 432 from kernel DMA 324 (kernel fetcher circuit), and split the received kernel data 326 into multiple sub-kernels. The sub-kernels can be extracted from kernel extract circuit 432 as kernel coefficients 422 and provided to MAC 404 for convolution. The at least one neural engine 314 may perform a convolution operation on portion 408 of input data and the corresponding kernel coefficients 422 (sub-kernels) to generate multiple sub-channels of processed values 412 for each channel of portion 408 of input data 322.
The multiple sub-channels of processed values 412 generated for each channel of portion of input data 322 may be post-processed in post-processor 428, stored as processed values 417 in output circuit 424, and output as output data 328 for storage into data buffer 318. Each sub-channel of processed values 412 may be generated using a different accumulator 414 in the at least one neural engine 314. The data buffer 318 may interleave the sub-channels of output data 328 to produce a channel of output data 328 having a spatial size in accordance with a spatial size of the received kernel data 326. Note that the spatial size of output data 328 is larger than that of the received channel of portion of input data 322. Note also that two or more of the sub-kernels generated by splitting the received kernel data 326 may comprise padded zeros across at least one dimension of the two or more sub-kernels.
Enabling both input stride reduction and output stride expansion effectively doubles the supported kernel size. Note that, for both input stride reduction and output stride expansion, kernel data 326 received at kernel extract circuit 432 is subsampled. Because of the kernel subsampling, the maximum kernel range is effectively doubled, which allows kernels of effectively doubled size to be used for both convolution and transposed convolution.
For example, to perform a convolution with 29×29 shaped kernel data 326 and obtain 16×16 shaped output data 328, at least one of the neural engines 314 (e.g., via NE control 418) may pad zeros to kernel data 326 to effectively generate 30×30 shaped kernel data 326. Data buffer 318 may fetch 32×32 shaped portion of input data 322 from system memory 230 as overfetched 60×60 shaped portion of input data 322. Data buffer 318 may broadcast 60×60 shaped portion of input data 322 to neural processor circuit 218 as four sub-channels of 16×16 shaped portion of input data 322 overfetched to 30×30 shaped portion of input data 322. The at least one neural engine 314 may receive each 30×30 shaped portion of input data 322 (i.e., each source sub-channel) from data buffer 318 over multiple processing cycles as a sub-channel of portion of input data 322.
Inside neural engine 314, the 30×30 shaped kernel data 326 may be subsampled into 15×15 shaped kernels. Each 15×15 shaped kernel may be applied as corresponding kernel coefficients 422 to its respective sub-channel of portion of input data 322 (portion 408 of input data) and accumulated into its own accumulator 414. The four resulting 16×16 shaped sub-channels of processed values 412 may be post-processed and output as four sub-channels of output data 328 to data buffer 318. Data buffer 318 may re-interleave the four 16×16 shaped sub-channels into a 32×32 shaped result. Note that odd-sized kernels (e.g., 15×15 shaped kernels) may be zero-padded (e.g., via NE control 418) to obtain even-sized kernels for extraction from kernel extract circuit 432 as kernel coefficients 422.
When operating in large kernel mode, kernel data 326 is replicated. For 1-dimensional (1D) shaped data, half-sized kernel data 326 may be used for each pair of input and output sub-channels. For example, the even-even and odd-odd pairs of input-output sub-channels may use the same subsampled kernel including even coefficients 422 of original kernel data 326. The even-odd and odd-even pairs of input-output sub-channels may use spatially shifted kernels including odd coefficients 422 of original kernel data 326. When the spatial width of original kernel data 326 is odd, zero-padding may be applied on the left side for the odd-even sub-kernel and on the right side for the even-odd sub-kernel. For example, kernel data 326 having spatial width of five with coefficients [C0, C1, C2, C3, C4] may use coefficients [C0, C2, C4] for the even-even and odd-odd sub-kernels, coefficients [0, C1, C3] for the even-odd sub-kernel, and coefficients [C1, C3, 0] for the odd-even sub-kernel. The 2-dimensional (2D) versions of the sub-kernels are a direct extension of the 1D version, producing 16 quarter-sized sub-kernels. Four of these 16 quarter-sized sub-kernels comprise even kernel coefficients 422 and the other 12 sub-kernels comprise odd kernel coefficients 422 padded on different sides by zeros.
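The 1D coefficient layout above can be verified with a short sketch: a stride-1 convolution with the full odd-width kernel [C0..C4] equals the interleave of the four sub-kernel convolutions. Kernel taps and input values are illustrative:

```python
def conv1d(x, w):
    """Valid 1-D correlation."""
    return [sum(x[n + k] * w[k] for k in range(len(w)))
            for n in range(len(x) - len(w) + 1)]

C = [3, 1, 4, 1, 5]                  # odd-width kernel [C0..C4]
x = [(5 * i) % 13 for i in range(20)]

direct = conv1d(x, C)                # stride-1 conv with the full kernel

xe, xo = x[0::2], x[1::2]            # input sub-channels
w_ee = C[0::2]                       # even coefficients [C0, C2, C4]
w_eo = [0] + C[1::2]                 # zero-padded right-shifted [0, C1, C3]
w_oe = C[1::2] + [0]                 # zero-padded left-shifted  [C1, C3, 0]

# even outputs: even-even pair plus odd-even pair; odd outputs likewise
ye = [a + b for a, b in zip(conv1d(xe, w_ee), conv1d(xo, w_oe))]
yo = [a + b for a, b in zip(conv1d(xo, w_ee), conv1d(xe, w_eo))]

rebuilt = [0] * len(direct)
rebuilt[0::2], rebuilt[1::2] = ye, yo
assert rebuilt == direct
```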
To perform convolution in accordance with the large kernel mode, the at least one neural engine 314 is configured to receive multiple sub-channels of portion of input data 322 from data buffer 318. Each sub-channel of portion of input data 322 may be stored in input buffer circuit 402 and provided to MAC 404 for convolution as a sub-channel of portion 408 of input data. Neural engine 314 further generates multiple sub-kernels using a kernel received from the kernel DMA 324 at kernel extract circuit 432. The generated sub-kernels are extracted as corresponding kernel coefficients 422 and provided to MAC 404. The at least one neural engine 314 may perform convolution on each sub-channel of portion 408 of input data 322 and kernel coefficients 422 of corresponding sub-kernels to generate multiple sub-channels of processed values 412 for each sub-channel of portion 408 of input data. For each sub-channel of portion 408 of input data, the sub-channels of processed values 412 are post-processed in post-processor 428, stored in output circuit 424 as processed values 417, and output as output data 328 for storage in data buffer 318. Data buffer 318 may interleave the sub-channels of output data 328 for each sub-channel of portion 408 of input data to produce a channel of output data 328 stored in data buffer 318. Each sub-channel of processed values 412 may be generated using a different accumulator 414 in the at least one neural engine 314. Furthermore, the sub-kernels stored in kernel extract circuit 432 may comprise a subset of repeated sub-kernels, and two or more of the sub-kernels may comprise padded zeros across at least one dimension.
When portions of input data 322 are smaller than a full work unit, the neural engines 314 are unable to utilize a high level of MAC capacity. A CNN may use small patches of input data 322 in its terminal segments, e.g., patches of input data 322 having an 8×8 spatial size. Patches of input data 322 of spatial size 8×8 would cause neural engines 314 to operate at approximately 25% efficiency. In order to increase utilization efficiency, neural engines 314 presented herein support a work unit having a spatial size of 8×8, which pairs up four MAD circuits and accumulators 414 in MAC 404 to produce four simultaneous output channels of processed values 412 and output data 328. The spatial size of the work unit may be the same for both 8-bit integer precision and 16-bit floating point precision. Thus, at least one of the neural engines 314 may produce four 8×8 channels of output data 328 using an 8×8 patch of input data 322.
In regular operational mode, computation core 416 in neural engine 314 may process up to eight channels per broadcast from data buffer 318, depending on the number of output accumulators 414 per computation core 416. Each computation core 416 may also produce eight output channels per broadcast of portion of input data 322 from data buffer 318. In contrast, in the small source mode (e.g., when portion of input data 322 is broadcast as an 8×8 patch of input data), each computation core 416 may process four small output channels by feeding computation core 416 with four different kernel coefficients 422 for convolution with up to 64 input values of portion 408 of input data. Therefore, for the small source mode and 8-bit integer precision or 16-bit floating point precision, each computation core 416 may support up to 32 output channels of output data 328 per broadcast of portion of input data 322 from data buffer 318. A restriction on the spatial size of kernel data 326 received at kernel extract circuit 432 may be imposed, e.g., the spatial width of kernel data 326 may be less than or equal to eight. It should also be noted that accumulators 414 may be configured to perform accumulation operations on 32-bit integer operands, i.e., accumulated processed values 412 may be 32-bit integers.
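The small source mode can be sketched as one small patch driving several output channels at once. The 8×8 patch values and the four toy kernels below are illustrative only:

```python
def conv2d(x, w):
    """Valid 2-D correlation."""
    H, W, kh, kw = len(x), len(x[0]), len(w), len(w[0])
    return [[sum(x[i + a][j + b] * w[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(W - kw + 1)]
            for i in range(H - kh + 1)]

patch = [[(3 * i + j) % 7 for j in range(8)] for i in range(8)]   # 8x8 source
kernels = [[[k + 1 if (i, j) == (0, 0) else 0 for j in range(3)]
            for i in range(3)] for k in range(4)]                  # 4 toy 3x3 kernels

# one broadcast of the patch drives four output channels, one per kernel
outs = [conv2d(patch, w) for w in kernels]
assert len(outs) == 4 and len(outs[0]) == 6 and len(outs[0][0]) == 6
```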
The small source mode can be utilized when the entire output surface of output data 328 is of shape 8×8 or smaller, a shape that does not occur as a work unit during regular convolution of larger images. Also, the small source mode consumes four times as many kernel coefficients 422 per processing cycle, which increases kernel bandwidth. While the aforementioned 16-bit floating point mode for portion of input data 322 similarly increases kernel bandwidth (e.g., by two times, because two channels of output data 328 may be generated per processing cycle), the 16-bit floating point mode may reuse kernel coefficients 422 for every work unit of a large output surface of output data 328, reducing average kernel bandwidth. By necessity, 8×8 patches of portion of input data 322 incur no such reuse (i.e., 8×8 patches are single-work-unit surfaces), making the sustained kernel bandwidth very large for a series of small-patch layers of input data 322.
For performing convolution on small sources of portion of input data 322, neural engine 314 is configured to receive one or more patches of portion of input data 322 from data buffer 318 over a processing cycle. Neural engine 314 may further receive, from kernel DMA 324 at kernel extract circuit 432 during the processing cycle, kernel data 326 having multiple kernels. Neural engine 314 may perform convolution operations on each of the one or more patches of portion of input data 322 and the kernels extracted from kernel extract circuit 432 as corresponding kernel coefficients, producing multiple output channels of output data 328. Neural engine 314 may perform multiply-accumulate operations on one of the one or more patches of portion of input data 322 and two or more of the kernels producing multiple output channels of processed values 412 in accumulators 414.
Rasterizer 718 in data buffer 318 then instructs 1306 data buffer 318 to send multiple sub-channels of the portion of the input data to at least one of the neural engines 314. The work unit of input data (e.g., at least a portion of a sub-channel of input data) is then stored in input buffer circuit 402 of the at least one neural engine 314. Rasterizer 718 in data buffer 318 instructs data buffer 318 to de-interleave a channel of the portion of the input data into the sub-channels of the portion of the input data. Rasterizer 718 in data buffer 318 instructs data buffer 318 to send the sub-channels of the portion of the input data to the at least one neural engine 314 over multiple processing cycles.
Rasterizer 722 in kernel DMA 324 (kernel fetcher circuit) then instructs 1308 kernel DMA 324 to receive one or more kernels from system memory 230. Rasterizer 722 in kernel DMA 324 then instructs 1310 kernel DMA 324 to send a kernel of the one or more kernels to the at least one neural engine circuit 314. The kernel may be stored as kernel data 326 in the kernel extract circuit 432. Kernel data 326 may be decomposed (e.g., offline, prior to reception by neural engine 314 at kernel extract circuit 432) into a corresponding sub-kernel for each sub-channel of the portion of the input data.
The at least one neural engine 314 then performs 1314 a convolution operation on each sub-channel of the portion of the input data and the corresponding sub-kernel. The at least one neural engine 314 then accumulates 1316 corresponding outputs of each sub-channel portion of the convolution operation to generate a single channel of output data 328.
Rasterizer 718 in data buffer 318 may instruct data buffer 318 to send one or more channels of the portion of the input data to the at least one neural engine 314. A work unit of the one or more channels may be stored in input buffer circuits 402 of the one or more neural engines 314. Rasterizer 722 in kernel DMA 324 may also instruct kernel DMA 324 to send another kernel of the one or more kernels to the at least one neural engine 314. The other kernel may be stored as kernel data 326 in kernel extract circuit 432. Kernel data 326 may be decomposed (e.g., offline, prior to reception by neural engine 314 at kernel extract circuit 432) into multiple sub-kernels for extraction in corresponding sub-channel order as kernel coefficients 422. The at least one neural engine 314 performs another convolution operation on the one or more channels of the portion of input data and the sub-kernels to generate multiple sub-channel output data 328 for each channel of the portion of the input data. The sub-channel output data 328 for each channel of the portion of the input data are stored in data buffer 318. Rasterizer 718 in data buffer 318 may instruct data buffer 318 to interleave the sub-channel outputs for each channel of the portion of the input data to produce a channel output having a size in accordance with a size of the other kernel.
Rasterizer 718 in data buffer 318 may instruct data buffer 318 to send another set of sub-channels of the portion of the input data to the at least one neural engine 314. A work unit of the sub-channels may be stored in input buffer circuits 402 of the one or more neural engines 314. Rasterizer 722 in kernel DMA 324 may also instruct kernel DMA 324 to send another kernel of the one or more kernels to the at least one neural engine 314. The other kernel may be stored as kernel data 326 in kernel extract circuit 432. Kernel data 326 may be decomposed (e.g., offline, prior to reception by neural engine 314 at kernel extract circuit 432) into multiple sub-kernels for extraction in corresponding sub-channel order as kernel coefficients 422. The at least one neural engine 314 performs another convolution operation on each sub-channel of the other set of sub-channels of the portion of the input data and the sub-kernels to generate multiple sub-channel output data 328 for each sub-channel of the portion of the input data. The sub-channel output data 328 for each sub-channel of the portion of the input data are stored in data buffer 318. Rasterizer 718 in data buffer 318 may instruct data buffer 318 to interleave the sub-channel outputs for each sub-channel of the portion of the input data to produce output data.
Rasterizer 718 in data buffer 318 may instruct data buffer 318 to send one or more patches of the portion of the input data to the at least one neural engine 314 over a processing cycle. The one or more patches of the portion of the input data may be stored in input buffer circuits 402 of the one or more neural engines 314. Rasterizer 722 in kernel DMA 324 may also instruct kernel DMA 324 to send multiple kernels to the at least one neural engine 314 over the processing cycle. The kernels may be stored as kernel data 326 in the kernel extract circuit 432. The at least one neural engine 314 performs convolution operations on each of the one or more patches of the portion of the input data and the kernels to produce multiple output channels of output data 328.
Embodiments of the process as described above with reference to
While particular embodiments and applications have been illustrated and described, it is to be understood that the invention is not limited to the precise construction and components disclosed herein and that various modifications, changes and variations which will be apparent to those skilled in the art may be made in the arrangement, operation and details of the method and apparatus disclosed herein without departing from the spirit and scope of the present disclosure.