Computing systems have made significant contributions toward the advancement of modern society and are utilized in a number of applications to achieve advantageous results. Applications such as artificial intelligence, machine learning, big data analytics and the like perform computations on large amounts of data. In conventional computing systems, data is transferred from memory to one or more processing units, the processing units perform calculations on the data, and the results are then transferred back to memory. The transfer of large amounts of data from memory to the processing unit and back to memory takes time and consumes power. Accordingly, there is a continuing need for improved computing systems that reduce processing latency, data latency and or power consumption.
The present technology may best be understood by referring to the following description and accompanying drawings that are used to illustrate embodiments of the present technology directed toward memory processing architectures.
In one embodiment, a memory processing unit (MPU) can include a first memory and a plurality of processing regions. The first memory can include a plurality of memory regions, wherein the plurality of memory regions can be configured in corresponding pluralities of memory blocks. The plurality of processing regions can be interleaved between the plurality of regions of the first memory, wherein the processing regions include a plurality of core groups, and the core groups include one or more compute cores. One or more of the plurality of core groups of respective ones of the plurality of processing regions can be coupled between adjacent memory blocks of adjacent memory regions of the first memory.
In another embodiment, a memory processing unit (MPU) can include a first memory and a plurality of processing regions interleaved between a plurality of regions of the first memory. The plurality of memory regions can be configured in corresponding pluralities of memory blocks and the plurality of processing regions can be configured in corresponding pluralities of core groups. The plurality of core groups of respective ones of the plurality of processing regions can be coupled between adjacent ones of the plurality of memory regions of the first memory. Each given core group can be coupled to a set of directly adjacent memory blocks, while not coupled to the other memory blocks of the adjacent memory regions.
In yet another embodiment, a memory processing unit (MPU) can include a first memory and a plurality of processing regions. The first memory can include a plurality of memory regions. One or more of the plurality of memory regions can be configured in a corresponding plurality of memory blocks. The memory blocks can be configured to store weights in a selected first bit precision or second bit precision. The processing regions can include a plurality of core groups, wherein the core groups include one or more compute cores.
In yet another embodiment, a memory processing method can include configuring a first memory to store weights in a selected first bit precision or second bit precision, wherein the first memory includes a plurality of regions. Data flow can be configured between compute cores of one or more of a plurality of processing regions and corresponding adjacent ones of the plurality of regions of the first memory. Data flow can also be configured between a second memory and the compute cores of the one or more of the plurality of processing regions. Data flow can also be configured between compute cores within respective ones of the one or more of the plurality of processing regions. One or more sets of compute cores of one or more of the plurality of processing regions can be configured to perform respective compute functions of a neural network model. Weights in the selected first bit precision or second bit precision for the neural network model can be loaded into the second memory. Activation data for the neural network model can be loaded into one or more of the plurality of regions of the first memory. Data movement can be synchronized between one or more compute cores producing given data and one or more other compute cores consuming the given data based on the neural network model.
In yet another embodiment, a memory processing unit (MPU) can include a first memory including a plurality of memory regions. The plurality of memory regions can be configured to store weights of a weight matrix in a selected first bit precision or second bit precision. The MPU can also include a plurality of processing regions interleaved between the plurality of regions of the first memory. The processing regions can include a plurality of compute cores, wherein the plurality of compute cores are coupled between adjacent ones of the plurality of memory regions. The compute cores can include a multi-accumulator multiply-and-accumulate (MAC) unit configured to compute matrix dot products between the weight matrix and a feature map matrix. Element values of the weight matrix can be reused by time division multiplexing computations of the matrix dot product.
In yet another embodiment, a memory processing unit (MPU) can include a first memory and a plurality of processing regions. The first memory can include a plurality of memory regions, wherein the plurality of memory regions can be configured in corresponding pluralities of memory blocks. The plurality of processing regions can be interleaved between the plurality of regions of the first memory, wherein the processing regions include a plurality of core groups, and the core groups include one or more compute cores. One or more of the plurality of core groups of respective ones of the plurality of processing regions can be coupled between adjacent memory blocks of adjacent memory regions of the first memory. The design of the memory processing unit (MPU) can include considering the impact of a number of parameters and a number of layers of a number of different neural network models on memory balance. The design can also consider the impact of core utilization and memory reuse on bandwidth balance. The design can also consider one or more design points, one or more supported operations, one or more target models, and supported memory organizations.
This Summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used to limit the scope of the claimed subject matter.
Embodiments of the present technology are illustrated by way of example and not by way of limitation, in the figures of the accompanying drawings and in which like reference numerals refer to similar elements and in which:
Reference will now be made in detail to the embodiments of the present technology, examples of which are illustrated in the accompanying drawings. While the present technology will be described in conjunction with these embodiments, it will be understood that they are not intended to limit the technology to these embodiments. On the contrary, the invention is intended to cover alternatives, modifications and equivalents, which may be included within the scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present technology, numerous specific details are set forth in order to provide a thorough understanding of the present technology. However, it is understood that the present technology may be practiced without these specific details. In other instances, well-known methods, procedures, components, and circuits have not been described in detail as not to unnecessarily obscure aspects of the present technology.
Some embodiments of the present technology which follow are presented in terms of routines, modules, logic blocks, and other symbolic representations of operations on data within one or more electronic devices. The descriptions and representations are the means used by those skilled in the art to most effectively convey the substance of their work to others skilled in the art. A routine, module, logic block and/or the like, is herein, and generally, conceived to be a self-consistent sequence of processes or instructions leading to a desired result. The processes are those including physical manipulations of physical quantities. Usually, though not necessarily, these physical manipulations take the form of electric or magnetic signals capable of being stored, transferred, compared and otherwise manipulated in an electronic device. For reasons of convenience, and with reference to common usage, these signals are referred to as data, bits, values, elements, symbols, characters, terms, numbers, strings, and/or the like with reference to embodiments of the present technology.
It should be borne in mind, however, that these terms are to be interpreted as referencing physical manipulations and quantities and are merely convenient labels and are to be interpreted further in view of terms commonly used in the art. Unless specifically stated otherwise as apparent from the following discussion, it is understood that through discussions of the present technology, discussions utilizing the terms such as “receiving,” and/or the like, refer to the actions and processes of an electronic device such as an electronic computing device that manipulates and transforms data. The data is represented as physical (e.g., electronic) quantities within the electronic device's logic circuits, registers, memories and/or the like, and is transformed into other data similarly represented as physical quantities within the electronic device.
In this application, the use of the disjunctive is intended to include the conjunctive. The use of definite or indefinite articles is not intended to indicate cardinality. In particular, a reference to “the” object or “a” object is intended to denote also one of a possible plurality of such objects. The use of the terms “comprises,” “comprising,” “includes,” “including” and the like specify the presence of stated elements, but do not preclude the presence or addition of one or more other elements and or groups thereof. It is also to be understood that although the terms first, second, etc. may be used herein to describe various elements, such elements should not be limited by these terms. These terms are used herein to distinguish one element from another. For example, a first element could be termed a second element, and similarly a second element could be termed a first element, without departing from the scope of embodiments. It is also to be understood that when an element is referred to as being “coupled” to another element, it may be directly or indirectly connected to the other element, or an intervening element may be present. In contrast, when an element is referred to as being “directly connected” to another element, there are no intervening elements present. It is also to be understood that the term “and or” includes any and all combinations of one or more of the associated elements. It is also to be understood that the phraseology and terminology used herein is for the purpose of description and should not be regarded as limiting.
Referring now to
One or more of the plurality of processing regions 135-150 can be configured to perform one or more computation functions, one or more instances of one or more computation functions, one or more segments of one or more computation functions, or the like. For example, a first processing region 135 can be configured to perform two computation functions, and a second processing region 140 can be configured to perform a third computation function. In another example, the first processing region 135 can be configured to perform three instances of a first computation function, and the second processing region 140 can be configured to perform a second and third computation function. In yet another example, a given computation function can have a size larger than the predetermined size of the one or more processing regions. In such case, the given computation function can be segmented, and the computation function can be configured to be performed on one or more of the plurality of processing regions 135-150. The processing regions 135-150 can each include one or more memory processing units, memory processing unit cores, or the like. The memory processing units and or cores can implement computation functions in arrays of memory cells without changing the basic memory array structure. The one or more centralized or distributed control circuitry 160 can configure the one or more computation functions of the one or more of the plurality of processing regions 135-150. The computation functions can include, but are not limited to, vector products, matrix-dot-products, convolutions, min/max pooling, averaging, scaling, and or the like.
A central data flow direction can be utilized with the plurality of memory regions 110-130 and plurality of processing regions 135-150. The one or more centralized or distributed control circuitry 160 can control data flow into each given one of the plurality of processing regions 135-150 from a first adjacent one of the plurality of memory regions 110-130 to a second adjacent one of the plurality of memory regions 110-130. For example, the one or more control circuitry 160 can configure data to flow into a first processing region 135 from a first memory region 110 and out to a second memory region 115. Similarly, the one or more control circuitry 160 can configure data to flow into a second processing region 140 from the second memory region 115 and out to a third memory region 120. The control circuitry 160 can include a centralized control circuitry, distributed control circuitry or a combination thereof. If distributed, the control circuitry 160 can be local to the plurality of memory regions 110-130, the plurality of processing regions 135-150, and or one or more communication links 155.
In one implementation, the plurality of memory regions 110-130 and the plurality of processing regions 135-150 can be columnally interleaved with each other. The data can be configured by the one or more centralized or distributed control circuitry 160 to flow between adjacent columnally interleaved processing regions 135-150 and memory regions 110-130 in a cross-columnal direction. In one implementation, the data can flow in a unidirectional cross-columnal direction between adjacent processing regions 135-150 and memory regions 110-130. For example, data can be configured to flow from a first memory region 110 into a first processing region 135, from the first processing region 135 out to a second memory region 115, from the second memory region 115 into a second processing region 140, and so on. In another implementation, the data can flow in a bidirectional cross-columnal direction between adjacent processing regions 135-150 and memory regions 110-130. In addition or alternatively, data within respective ones of the processing regions 135-150 can flow between functions within the same processing region. For example, for a first processing region 135 configured to perform two computation functions, data can flow from the first computation function directly to the second computation function without being written to or read from an adjacent memory region.
The one or more communication links 155 can be coupled between the interleaved plurality of memory regions 110-130 and plurality of processing regions 135-150. The one or more communication links 155 can be configured for moving data between non-adjacent ones of the plurality of memory regions 110-130, between non-adjacent ones of the plurality of processing regions 135-150, or between non-adjacent ones of a given memory region and a given processing region. For example, the one or more communication links 155 can be configured for moving data between the second memory region 115 and a fourth memory region 125. In another example, the one or more communication links 155 can be configured for moving data between the first processing region 135 and a third processing region 145. In another example, the one or more communication links 155 can be configured for moving data between the second memory region 115 and the third processing region 145, or between the second processing region 140 and a fourth memory region 125.
Generally, the plurality of memory regions 110-130 and the plurality of processing regions 135-150 are configured such that partial sums move in a given direction through a given processing region. In addition, the plurality of memory regions 110-130 and the plurality of processing regions 135-150 are generally configured such that edge outputs move in a given direction from a given processing region to an adjacent memory region. The terms partial sums and edge outputs are used herein to refer to the results of a given computation function or a segment of a computation function. The computation functions of the plurality of processing regions 135-150 and the dataflow between the plurality of processing regions 135-150 and the memory regions 110-130 can be conceptualized as a plurality of producers and consumers. Computation functions of a given processing region can consume data from a given memory region and produce output data to a next memory region. The output data stored in the next memory region can then be consumed by computation functions of a next processing region. Accordingly, producers and consumers communicate through shared memory regions 110-130. The computation functions and dataflow between adjacent processing regions 135-150 and memory regions 110-130 can be mapped to ensure adjacency requirements are met. The shared data can therefore be synchronized in a dataflow manner without a global centralized control unit.
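By way of a non-limiting illustration only, the following sketch (in Python, with hypothetical region counts and computation functions that are not taken from the figures) models the producer-consumer relationship described above, in which each processing region consumes data from one adjacent memory region and produces data into the next:

```python
# Illustrative sketch only: a software model of the interleaved memory/processing
# pipeline described above (region numbering and function choices are hypothetical,
# not taken from the figures).
from typing import Callable, List

def run_pipeline(memory_regions: List[list],
                 region_functions: List[Callable[[list], list]]) -> List[list]:
    """Propagate data mem[i] -> processing region i -> mem[i+1] in one direction."""
    assert len(memory_regions) == len(region_functions) + 1
    for i, fn in enumerate(region_functions):
        # A processing region consumes from the adjacent "upstream" memory region
        # and produces into the adjacent "downstream" memory region.
        memory_regions[i + 1] = fn(memory_regions[i])
    return memory_regions

# Example: three memory regions interleaved with two processing regions.
mems = [[1.0, -2.0, 3.0], [], []]
funcs = [lambda xs: [2.0 * x for x in xs],          # e.g. a scaling function
         lambda xs: [max(x, 0.0) for x in xs]]      # e.g. a ReLU-like function
print(run_pipeline(mems, funcs)[-1])  # data emerging from the last memory region
```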
Referring to
The memory processing unit 200 can also include a second memory 218. The second memory 218 can be coupled to the plurality of processing regions 212-216. The second memory 218 can optionally be logically or physically organized into a plurality of regions. The plurality of regions of the second memory 218 can be associated with corresponding ones of the plurality of processing regions 212-216. In addition, the plurality of regions of the second memory 218 can include a plurality of blocks organized in one or more macros. The second memory can be non-volatile memory, such as resistive random-access memory (RRAM), magnetic random-access memory (MRAM), flash memory (FLASH) or the like. The second memory can alternatively be volatile memory.
One or more of the plurality of processing regions 212-216 can be configured to perform one or more computation functions, one or more instances of one or more computation functions, one or more segments of one or more computation functions, or the like. For example, a first processing region 212 can be configured to perform two computation functions, and a second processing region 214 can be configured to perform a third computation function. In another example, the first processing region 212 can be configured to perform three instances of a first computation function, and the second processing region 214 can be configured to perform a second and third computation function. Similarly, the compute cores 220-232 can be configured to perform one or more computation functions, one or more instances of one or more computation functions, one or more segments of one or more computation functions, or the like. The compute cores 220-232 of the plurality of processing regions 212-216 can each include one or more memory processing units, memory processing unit cores, or the like. The memory processing units and or cores can implement computation functions in arrays of memory cells without changing the basic memory array structure.
The memory processing unit 200 can further include an inter-layer-communication (ILC) unit 240. The ILC unit 240 can be global or distributed across the plurality of processing regions 212-216. In one implementation, the ILC unit 240 can include a plurality of ILC modules 242-246, wherein each ILC module can be coupled to a respective processing region 212-216. Each ILC module can also be coupled to the respective regions of the first memory 202-210 adjacent the corresponding respective processing regions 212-216. The inter-layer-communication unit 240 can be configured to synchronize data movement between one or more compute cores producing given data and one or more other compute cores consuming the given data.
The memory processing unit 200 can further include one or more input/output stages 242, 244. The one or more input/output stages 242, 244 can be coupled to one or more respective regions of the first memory 202-210. The one or more input/output stages 242, 244 can include one or more input ports, one or more output ports, and or one or more input/output ports. The one or more input/output stages 242, 244 can be configured to stream data into or out of the memory processing unit 200. For example, one or more of the input/output (I/O) stages can be configured to stream data into a first one of the plurality of regions of the first memory 202-210. Similarly, one or more input/output (I/O) stages can be configured to stream data out of a last one of the plurality of regions of the first memory 202-210.
The plurality of processing regions 212-216 can be configurable for memory-to-core dataflow from respective ones of the plurality of regions of the first memory 202-210 to one or more cores 220-232 within adjacent ones of the plurality of processing regions 212-216. The plurality of processing regions 212-216 can also be configurable for core-to-memory dataflow from one or more cores 220-232 within ones of the plurality of processing regions 212-216 to adjacent ones of the plurality of regions of the first memory 202-210. The plurality of processing regions 212-216 and plurality of regions of the first memory 202-210 can also be configured for memory-to-core-to-memory data flow. For example, the dataflow can be configured for a given direction from given ones of the plurality of regions of the first memory 202-210 through respective ones of the plurality of processing regions to adjacent ones of the plurality of regions of the first memory 202-210. In one implementation, the computation functions of compute cores and dataflow between processing regions 212-216 and first memory 202-210 can be organized to ensure adjacency requirements so that dataflow of shared data can be synchronized therebetween without a global centralized control unit.
The plurality of processing regions 212-216 can also be configurable for memory-to-core data flow from the second memory 218 to one or more cores 220-232 of corresponding ones of the plurality of processing regions 212-216. If the second memory 218 is logically or physically organized in a plurality of regions, respective ones of the plurality of regions of the second memory 218 can be configurably couplable to one or more compute cores in respective ones of the plurality of processing regions 212-216.
The plurality of processing regions 212-216 can be further configurable for core-to-core data flow between select adjacent compute cores 220-232 in respective ones of the plurality of processing regions 212-216. For example, a given core 224 can be configured to share data, accessed from an adjacent portion of the first memory 202, with one or more other cores 226-228 configurably coupled in series with the given compute core 224. In another example, a given core 220 can be configured to pass data accessed from the second memory 218 to one or more other cores 222 configurably coupled in series with the given compute core 220. In yet another example, a given compute core 220 can pass a result, such as a partial sum, computed by the given compute core 220, to one or more other cores 222 configurably coupled in series with the given compute core 220.
Referring to
The memory processing unit 300 can also include a second memory 318. The second memory 318 can be coupled to the plurality of processing regions 312-316. The second memory 318 can optionally be logically or physically organized into a plurality of regions. The plurality of regions of the second memory 318 can be associated with corresponding ones of the plurality of processing regions 312-316. In addition, the plurality of regions of the second memory 318 can include a plurality of blocks organized in one or more macros. The first memory 302-310 can be volatile memory, such as static random-access memory (SRAM) or the like. The second memory can be non-volatile memory, such as resistive random-access memory (RRAM), magnetic random-access memory (MRAM), flash memory (FLASH) or the like. The second memory can alternatively be volatile memory.
The memory processing unit 300 can further include an inter-layer-communication (ILC) unit 340. The ILC unit 340 can be global or distributed across the plurality of processing regions 312-316. In one implementation, the ILC unit 340 can include a plurality of ILC modules, wherein each ILC module can be coupled to a respective processing region 312-316. Each ILC module can also be coupled to the respective regions of the first memory 302-310 adjacent the corresponding respective processing regions 312-316. The inter-layer-communication unit 340 can be configured to synchronize data movement between one or more compute cores producing given data and one or more other compute cores consuming the given data. The inter-layer communication unit 340 can map the computation functions of compute cores and dataflow between processing regions 312-316 and first memory 302-310 on an adjacency basis so that dataflow of shared data can be synchronized therebetween.
The memory processing unit 300 can further include one or more input/output stages 348, 350. The one or more input/output stages 348, 350 can be coupled to one or more respective regions of the first memory 302-310. The one or more input/output stages 348, 350 can include one or more input ports, one or more output ports, and or one or more input/output ports. The one or more input/output stages 348, 350 can be configured to stream data into or out of the memory processing unit 300. For example, one or more of the input/output (I/O) stages can be configured to stream data into a first one of the plurality of regions of the first memory 302-310. Similarly, one or more input/output (I/O) stages can be configured to stream data out of a last one of the plurality of regions of the first memory 302-310.
The plurality of processing regions 312-316 can be configurable for memory-to-core dataflow from respective ones of the plurality of regions of the first memory 302-310 to one or more cores 320-332 within adjacent ones of the plurality of processing regions 312-316. The plurality of processing regions 312-316 can also be configurable for core-to-memory dataflow from one or more cores 320-332 within ones of the plurality of processing regions 312-316 to adjacent ones of the plurality of regions of the first memory 302-310. In one implementation, the dataflow can be configured for a given direction from given ones of the plurality of regions of the first memory 302-310 through respective ones of the plurality of processing regions to adjacent ones of the plurality of regions of the first memory 302-310.
The plurality of processing regions 312-316 can also be configurable for memory-to-core data flow from the second memory 318 to one or more cores 320-332 of corresponding ones of the plurality of processing regions 312-316. If the second memory 318 is logically or physically organized in a plurality of regions, respective ones of the plurality of regions of the second memory 318 can be configurably couplable to one or more compute cores in respective ones of the plurality of processing regions 312-316.
The plurality of processing regions 312-316 can be further configurable for core-to-core data flow between select adjacent compute cores 320-332 in respective ones of the plurality of processing regions 312-316. For example, a given core 324 can be configured to share data, accessed from an adjacent portion of the first memory 302, with one or more other cores 326-328 configurably coupled in series with the given compute core 324. In another example, a given core 320 can be configured to pass data, accessed from the second memory 318, to one or more other cores 322 configurably coupled in series with the given compute core 320. In yet another example, a given compute core 320 can pass a result, such as a partial sum, computed by the given compute core 320, to one or more other cores 322 configurably coupled in series with the given compute core 320.
The plurality of processing regions 312-316 can include one or more near memory (M) compute cores. The one or more near memory (M) compute cores can be configurable to compute neural network functions. For example, the one or more near memory (M) compute cores can be configured to compute vector-vector products, vector-matrix products, matrix-matrix products, and the like, and or partial products thereof.
The plurality of processing regions 312-316 can also include one or more arithmetic (A) compute cores. The one or more arithmetic (A) compute cores can be configurable to compute arithmetic operations. For example, the arithmetic (A) compute cores can be configured to compute merge operations, arithmetic calculations that are not supported by the near memory (M) compute cores, and or the like.
The plurality of input and output regions 348, 350 can also include one or more input/output (I/O) cores. The one or more input/output (I/O) cores can be configured to access input and or output ports of the memory processing unit (MPU) 300. The term input/output (I/O) core as used herein can refer to cores configured to access input ports, cores configured to access output ports, or cores configured to access both input and output ports.
The compute cores 320-332 can include a plurality of physical channels configurable to perform computations, accesses and the like, simultaneously with other cores within respective processing regions 312-316, and or simultaneously with other cores in other processing regions 312-316. The compute cores 320-332 of respective ones of the plurality of processing regions 312-316 can be associated with one or more blocks of the second memory 318. The compute cores 320-332 of respective ones of the plurality of processing regions 312-316 can be associated with respective slices of the second plurality of memory regions. The cores 320-332 can also include a plurality of configurable virtual channels.
Referring now to
One or more of the compute cores, and or one or more core groups of the plurality of processing regions 410-414 can be configured to perform one or more computation functions, one or more instances of one or more computation functions, one or more segments of one or more computation functions, or the like. For example, a first compute core, a first core group 434 or a first processing region 410 can be configured to perform two computation functions, and a second compute core, second core group or second processing region 412 can be configured to perform a third computation function. In another example, a first compute core, the first core group 434 or the first processing region 410 can be configured to perform three instances of a first computation function, and a second compute core, second core group or second processing region 412 can be configured to perform a second and third computation function. In yet another example, a given computation function can have a size larger than the predetermined size of a compute core, core group or one or more processing regions. In such case, the given computation function can be segmented, and the computation function can be configured to be performed on one or more compute cores, one or more core groups or one or more of the processing regions 410-414. The computation functions can include, but are not limited to, vector products, matrix-dot-products, convolutions, min/max pooling, averaging, scaling, and or the like.
The memory processing unit 400 can also include one or more inter-layer communication (ILC) units 450-456. The ILC units 450-456 can be global or distributed across the plurality of processing regions 410-414. In one implementation, the ILC units 450-456 can include a plurality of ILC modules 450-456, wherein each ILC module can be coupled to adjacent respective processing regions 410-414. Each ILC module 450-456 can also be coupled to adjacent respective regions of the first memory 402-408. The inter-layer-communication units 450-456 can be configured to synchronize data movement between one or more compute cores producing given data and one or more other compute cores consuming the given data. Again, the inter-layer communication units 450-456 can map the computation functions of compute cores and dataflow between processing regions 410-414 and first memory 402-408 based on adjacency so that dataflow of shared data can be synchronized therebetween.
The compute cores of the core groups 434-448 of the processing regions 410-414 can include a plurality of physical channels configurable to perform computations, accesses and the like, simultaneously with other cores within respective core groups 434-448 and or processing regions 410-414, and or simultaneously with other cores in other core groups 434-448 and or processing regions 410-414. The compute cores can also include a plurality of configurable virtual channels.
Relatively large flat memory regions such as the plurality of first memory regions described above with reference to
Referring now to
The memory processing unit 500 can also include a second memory 510. The second memory 510 can be coupled to the plurality of processing regions 410-414. The second memory 510 can optionally be logically or physically organized into a plurality of regions (not shown). The plurality of regions of the second memory 510 can be associated with corresponding ones of the plurality of processing regions 410-414. In addition, the plurality of regions of the second memory 510 can include a plurality of blocks organized in one or more macros. The second memory can be non-volatile memory, such as resistive random-access memory (RRAM), magnetic random-access memory (MRAM), flash memory (FLASH) or the like. The second memory can alternatively be volatile memory.
One or more of the compute cores, and or one or more core groups of the plurality of processing regions 410-414 can be configured to perform one or more computation functions, one or more instances of one or more computation functions, one or more segments of one or more computation functions, or the like. For example, a first compute core, a first core group 434 or a first processing region 410 can be configured to perform two computation functions, and a second compute core, second core group or second processing region 412 can be configured to perform a third computation function. In another example, the first compute core, the first core group 434 or the first processing region 410 can be configured to perform three instances of a first computation function, and the second compute core, second core group or the second processing region 412 can be configured to perform a second and third computation function. In yet another example, a given computation function can have a size larger than the predetermined size of a compute core, core group or one or more processing regions. In such case, the given computation function can be segmented, and the computation function can be configured to be performed on one or more compute cores, one or more core groups or one or more of the processing regions 410-414. The computation functions can include, but are not limited to, vector products, matrix-dot-products, convolutions, min/max pooling, averaging, scaling, and or the like.
The dataflow can be configured by the one or more centralized or distributed control circuitry and or inter-layer communication (ILC) units 450-456 to flow between adjacent columnally interleaved processing regions 410-414 and memory regions 402-408 in a cross-columnal direction. In one implementation, one or more communication links can be coupled between the interleaved plurality of memory regions 402-408 and plurality of processing regions 410-414. The one or more communication links can also be configured for moving data between non-adjacent ones of the plurality of memory regions 402-408, between non-adjacent ones of the plurality of processing regions 410-414, or between non-adjacent ones of a given memory region and a given processing region.
The plurality of processing regions 410-414 can be configurable for memory-to-core dataflow from respective ones of the plurality of regions of the first memory 402-408 to one or more cores within adjacent ones of the plurality of processing regions 410-414. The plurality of processing regions 410-414 can also be configurable for core-to-memory dataflow from one or more cores within ones of the plurality of processing regions 410-414 to adjacent ones of the plurality of regions of the first memory 402-408. In one implementation, the dataflow can be configured for a given direction from given ones of the plurality of regions of the first memory 402-408 through respective ones of the plurality of processing regions to adjacent ones of the plurality of regions of the first memory 402-408.
The plurality of processing regions 410-414 can also be configurable for memory-to-core data flow from the second memory 510 to one or more cores of corresponding ones of the plurality of processing regions 410-414. If the second memory 510 is logically or physically organized in a plurality of regions, respective ones of the plurality of regions of the second memory 510 can be configurably couplable to one or more compute cores in respective ones of the plurality of processing regions 410-414.
The plurality of processing regions 410-414 can be further configurable for core-to-core data flow between select adjacent compute cores in respective ones of the plurality of processing regions 410-414. For example, a given core can be configured to pass data accessed from an adjacent portion of the first memory 402 to one or more other cores configurably coupled in series with the given compute core. In another example, a given core can be configured to pass data accessed from the second memory 510 to one or more other cores configurably coupled in series with the given compute core. In yet another example, a given compute core can pass a result, such as a partial sum, computed by the given compute core to one or more other cores configurably coupled in series with the given compute core.
Again, relatively large flat memory regions, such as the plurality of first memory regions described above with reference to
The plurality of processing regions 410-414 can include one or more near memory (M) compute cores. The one or more near memory (M) compute cores can be configurable to compute neural network functions. For example, the one or more near memory (M) compute cores can be configured to compute vector-vector products, vector-matrix products, matrix-matrix products, and the like, and or partial products thereof.
The plurality of processing regions 410-414 can also include one or more arithmetic (A) compute cores. The one or more arithmetic (A) compute cores can be configurable to compute arithmetic operations. For example, the arithmetic (A) compute cores can be configured to compute merge operations, arithmetic calculations that are not supported by the near memory (M) compute cores, and or the like.
A plurality of input and output regions (not shown) can also include one or more input/output (I/O) cores. The one or more input/output (I/O) cores can be configured to access input and or output ports of the memory processing unit (MPU) 500. The term input/output (I/O) core as used herein can refer to cores configured to access input ports, cores configured to access output ports, or cores configured to access both input and output ports.
The compute cores can also include other types of compute cores such as graph processing cores or the like. The compute cores of the core groups 434-448 of the processing regions 410-414 can include a plurality of physical channels configurable to perform computations, accesses and the like, simultaneously with other cores within respective core groups 434-448 and or processing regions 410-414, and or simultaneously with other cores in other core groups 434-448 and or processing regions 410-414. The compute cores can also include a plurality of configurable virtual channels.
The plurality of memory regions 402-408 can also be organized into a plurality of memory blocks arranged in a plurality of columns and rows for each memory region 402-408. For example, each given memory region 404 can be organized into a plurality of memory blocks of m blocks wide and n blocks long, wherein m and n can be different or equal. A fetch unit for a respective processing region, core group or compute core can be configured to fetch from sets of memory blocks of respective adjacent memory regions. Similarly, a write back unit for a respective processing region, core group or compute core can be configured to write back to a set of memory blocks of respective adjacent memory regions. The organization of the plurality of memory blocks in a plurality of columns and rows can provide further increased memory bandwidth for increased performance. The organization of the plurality of memory blocks arranged in a plurality of columns and rows is further explained below with reference to
In accordance with aspects of the present technology, a neural network layer, a part of a neural network layer, or a plurality of fused neural network layers can be mapped to a single cluster of compute cores or a core group as a mapping unit. A cluster of compute cores is a set of cores of a given processing region that are configured to work together to compute a mapping unit. For example, the nodes of a first layer 610 of a neural network can be mapped as a mapping unit to a first set of compute cores, the nodes of a second layer 620 can be mapped to a second set of compute cores, while the nodes of a third layer 630 can be mapped to a third set of compute cores, as illustrated in
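By way of a non-limiting illustration, a mapping of neural network layers (mapping units) onto clusters of compute cores may be represented as a simple configuration; the layer names, region indices and core counts below are hypothetical and are not taken from the figures:

```python
# Illustrative sketch only: assigning mapping units (layers or fused groups of
# layers) to clusters of compute cores within processing regions. The layer and
# cluster names are hypothetical.
mapping_units = ["layer_1", "layer_2", "layer_3"]              # one unit per neural network layer
core_clusters = {
    "layer_1": {"processing_region": 0, "cores": [0, 1, 2]},   # first set of compute cores
    "layer_2": {"processing_region": 1, "cores": [0, 1]},      # second set of compute cores
    "layer_3": {"processing_region": 2, "cores": [0]},         # third set of compute cores
}

for unit in mapping_units:
    cluster = core_clusters[unit]
    print(f"{unit} -> region {cluster['processing_region']}, cores {cluster['cores']}")
```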
Referring now to
Referring now to
The dataflow 930 from the second memory 970 to the compute cores of the processing regions can also be configured. In one implementation, the dataflow from the second memory 970 to the compute cores 950-956 can provide a direct route to access kernel data, weight data or the like. The dataflow 940 between the compute cores 950-956 can also be configured. In one implementation, the dataflow between the compute cores 950-956 can provide for the sharing of data from the second memory with others of the compute cores 950-956 in a corresponding core group and or processing region.
The plurality of processing regions can include one or more near memory (M) compute cores, one or more arithmetic (A) compute cores, and one or more input/output (I/O) cores. The one or more near memory (M) compute cores can be configurable to compute neural network functions. The one or more arithmetic (A) compute cores can be configurable to compute arithmetic operations. The one or more input/output (I/O) cores can be configured to access input and or output ports of the memory processing unit (MPU).
Referring now to
The multiply-and-accumulate (MAC) array unit 1010 can be configured to compute neural network functions. For example, the multiply-and-accumulate (MAC) array unit 1010 can be configured to compute vector-vector products, vector-matrix products, matrix-matrix products, and the like, and or partial products thereof. The multiply-and-accumulate (MAC) array unit 1010 can also be configured to perform per-channel and bias scaling. In one implementation, the multiply-and-accumulate (MAC) array unit 1010 can be configured to perform main operations such as, but not limited to, dense or fully connected convolutions, two-dimensional convolutions, depth-wise convolutions, and separable convolutions. The multiply-and-accumulate (MAC) array unit 1010 can also be configured to perform fused operations such as, but not limited to, max pooling, average pooling, rectified linear (ReLU) activation, ReLU-x activation, and up-sampling. The multiply-and-accumulate (MAC) array unit 1010 can also be configured to perform virtually fused operations such as, but not limited to, zero padding (folded into kernel corners), average pooling (folded into weights and biases), ReLU activation, ReLU-x activation, and up-sampling.
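By way of a non-limiting illustration, the following sketch approximates one main operation (a two-dimensional convolution) combined with one fused operation (ReLU-x activation); the feature map shape, kernel and clamp value are assumptions rather than details of the MAC array unit itself:

```python
# Illustrative sketch only: a two-dimensional convolution with a fused ReLU-x
# activation, approximating the kind of main + fused operation sequence the MAC
# array unit is described as performing. Shapes and the ReLU-x cap are assumptions.
import numpy as np

def conv2d_fused_relu_x(feature_map: np.ndarray, kernel: np.ndarray,
                        bias: float = 0.0, x_cap: float = 6.0) -> np.ndarray:
    kh, kw = kernel.shape
    h, w = feature_map.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            # Multiply-and-accumulate over the kernel window, then add the bias.
            acc = np.sum(feature_map[r:r + kh, c:c + kw] * kernel) + bias
            # Fused ReLU-x: clamp the accumulated result into [0, x_cap].
            out[r, c] = min(max(acc, 0.0), x_cap)
    return out

fm = np.arange(16, dtype=float).reshape(4, 4)
k = np.ones((3, 3)) / 9.0
print(conv2d_fused_relu_x(fm, k))
```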
The writeback unit 1015 can be configured to write data to an N+1th portion of the first memory for the multiply-and-accumulate (MAC) array unit 1010. The writeback unit 1015 can also be configured to synchronize data movement of the Nth portion of the first memory with the inter-layer-communication (ILC) unit. In one implementation, the writeback unit 1015 can be configured to perform a fuse operation, send data to an adjacent region of the first memory or adjacent compute core in the respective processing region, and to increment an inter-layer-communication (ILC) counter.
The switch 1020 can configure memory accesses, chain directions and interfaces of the fetch and writeback units to ports of the respective near memory (M) compute core based on configuration information. The switch 1020 can be preconfigured with memory access and chain directions. The switch 1020 can therefore interface the fetch unit 1005 and writeback unit 1015 based on the data-flow configuration.
The near memory (M) compute core 1000 can include a plurality of physical channels configurable to perform computations simultaneously. The near memory (M) compute core 1000 can also be associated with one or more blocks of the second memory. The physical channels of the near memory (M) compute core 1000 can be associated with respective slices of the second plurality of memory regions. The near memory (M) compute core 1000 can also include a plurality of configurable virtual channels.
Referring now to
The arithmetic unit 1110 can be configured to compute arithmetic operations not supported by the multiply-and-accumulate (MAC) array unit 1010. For example, the arithmetic unit 1110 can be configured to compute merge operations and or the like. The arithmetic unit 1110 can compute one or more output channels at a time. The arithmetic unit 1110 may not have access to the second memory. The arithmetic unit 1110 may have no means to pass data between adjacent cores in the same processing region. In one implementation, the arithmetic unit 1110 can be configured to perform main operations such as, but not limited to, add, multiply and bypass. The arithmetic unit 1110 can also be configured to perform fused operations such as, but not limited to, ReLU activation, ReLU-x activation, and leaky ReLU-x activation.
The writeback unit 1115 can be configured to write data to an N+1th portion of the first memory for the arithmetic unit 1110. The writeback unit 1115 can also be configured to synchronize data movement of the Nth portion of the first memory with the inter-layer-communication (ILC) unit. In one implementation, the writeback unit 1115 can be configured to perform a fuse operation, send data to an adjacent region of the first memory or an adjacent compute core in the respective processing region, and to increment an inter-layer-communication (ILC) counter.
The switch 1120 can configure memory accesses, chain directions and interfaces of the fetch and writeback units to ports of the arithmetic compute core based on configuration information.
Referring now to
Referring now to
Referring now to
Referring now to
Referring now to
Referring now to
Referring now to
In one implementation, data can be shared between processing regions by assigning a large enough buffer in the corresponding portion of the first memory. For example, the buffer can be allocated to carry a whole feature map shared between adjacent processing regions. The size of the buffer can be calculated in accordance with Equation 1:
S_b = \prod_{\forall i} F[i]    (1)
where F is the vector of the feature map size.
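By way of a non-limiting illustration, Equation 1 may be evaluated as a simple product of the feature map dimensions; the example feature map shape below is hypothetical:

```python
# Illustrative sketch only: the buffer size of Equation 1 is the product of all
# feature map dimensions. The example feature map shape is hypothetical.
from math import prod

def full_feature_map_buffer_size(feature_map_shape):
    """S_b = product over all i of F[i]."""
    return prod(feature_map_shape)

print(full_feature_map_buffer_size((32, 32, 16)))  # e.g. 32 x 32 x 16 = 16384 entries
```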
However, assigning a buffer the size of the whole feature map is not, by itself, enough for the data to flow correctly. Consumers need to avoid reading a buffer entry that is not yet filled by the producer. Assuming coarse-grain synchronization at the feature map row level, the consumer cannot read from a feature map row that is still being produced. For the sake of simplicity, each feature map row will be illustrated as a single buffer entry in
In another implementation, a smaller partial buffer can be sufficient for the dataflow to support the computations. For example, a circular queue can be utilized as a partial buffer. The partial buffer can be configured to carry enough data for the consumer to operate and have extra entries to allow the producer to generate data while the consumer is working. For example, the partial buffer can include three feature map rows in the case where the consumer is performing a convolution having a 3×3 kernel size. The partial buffer can also include extra entries, referred to as a pipeline margin. Without such a margin, the dataflow performance will degrade significantly since the producer and consumer will not be able to work concurrently. The producer also cannot overwrite data that is not yet consumed, and the consumer needs to wait for the producer to finish writing a new row in the partial buffer before starting to consume it. Referring now to
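By way of a non-limiting illustration, the following sketch steps a producer and a consumer through a partial buffer sized as the kernel rows plus a pipeline margin; the row counts and the one-entry margin below are assumptions:

```python
# Illustrative sketch only: sizing a circular partial buffer as kernel rows plus
# a pipeline margin, and stepping a producer and consumer that may not overrun
# one another. Row counts and the margin size are assumptions.
from collections import deque

KERNEL_ROWS = 3       # consumer needs three feature map rows for a 3x3 convolution
PIPELINE_MARGIN = 1   # extra entry so producer and consumer can work concurrently
CAPACITY = KERNEL_ROWS + PIPELINE_MARGIN

def simulate(total_rows: int) -> None:
    buffer = deque()                 # rows currently held in the partial buffer
    produced = consumed = 0
    while consumed + KERNEL_ROWS <= total_rows:
        # Producer: may write only while unconsumed rows fit in the partial buffer.
        if produced < total_rows and len(buffer) < CAPACITY:
            buffer.append(f"row{produced}")
            produced += 1
            continue
        # Consumer: may start only once a full kernel window of rows is present.
        if len(buffer) >= KERNEL_ROWS:
            window = list(buffer)[:KERNEL_ROWS]
            print("consume", window)
            buffer.popleft()         # oldest row is no longer needed; free one entry
            consumed += 1

simulate(total_rows=6)
```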
Referring now to
For ease of explanation, aspects of the present technology have been described with regard to a single producing cluster and a single consuming cluster. However, dataflow in the memory processing unit (MPU) can involve dataflow branching into multiple paths that can for example end as different outputs, merge again, and the like. While branching output can be treated the same as multiple single dataflow paths, merging branches can involve additional considerations. If a neural network with merging branches, for example, is not allocated the correct buffer size, the dataflow pipeline might end up in a deadlock or produce incorrect data. With data having multiple consumers, the data validity should be set by the slowest consumer. Typically, a longer data lifetime results in a need for a larger buffer size. Referring now to
Referring now to
The inter-layer-communication (ILC) unit can be configured to synchronize data movement between one or more compute cores producing given data and one or more other compute cores consuming the given data. Data communication within the memory processing unit can include direct and indirect connections between two modules. Direct synchronization can be implemented by direct wire connections with a producer/consumer handshake. The direct synchronization can be implemented by polymorphic connections between compute cores.
The inter-layer-communication unit can also synchronize indirect connections between two modules. Indirect synchronization can be implemented by use of a buffer between two modules. Indirect synchronization by the inter-layer-communication unit can be implemented as communication between compute cores and volatile memory (e.g., SRAM). In such an implementation, a producer compute core can write to a shared buffer in a corresponding first memory region and a consumer compute core can read from the shared buffer. The data can be synchronized to avoid data hazards that can occur in the buffer. Exemplary data hazards can include a producer core overwriting data to a buffer before a consumer core can read data from the buffer, or a consumer core reading data from a buffer before the producer core can write the data to the buffer. In one implementation, indirect synchronization can be implemented by the compute cores sending appropriate signals to the buffer to provide visible synchronization. In visible indirect synchronization, the buffers between the compute cores can act as a simple memory used for writing and reading data. The producer core can be configured to ensure that the consumer core is ready for data, and the consumer core can be configured to ensure that there is enough data in the memory so that it can perform a computation operation.
In another implementation, indirect synchronization can be implemented by the ILC unit to provide invisible synchronization. In invisible indirect synchronization, the ILC unit is responsible for keeping producer compute cores and consumer compute cores in synchronization.
Referring now to
In one implementation, data flow between compute cores 3715-3725 of one or more of a plurality of processing regions and corresponding adjacent ones of the plurality of regions of the first memory 3705 can be configured utilizing direct synchronization between the compute cores and the first memory. Similarly, data flow between the second memory (not shown) and the compute cores 3715-3755 of the one or more of the plurality of processing regions can be configured utilizing direct synchronization between the compute cores 3715-3755 and the second memory. Data flow between compute cores 3715-3725 within respective ones of the one or more of the plurality of processing regions can also be configured utilizing direct synchronization between adjacent compute cores within the respective processing region.
The inter-layer-communication (ILC) unit 3760-3765 can synchronize data movement between one or more compute cores 3715-3725 producing given data and one or more other compute cores 3730-3740 consuming the given data utilizing indirect invisible synchronization. Data movement synchronization by the inter-layer-communication (ILC) unit 3760-3765 will be further described with reference to
At 3820, the inter-layer-communication (ILC) unit 3760-3765 can track read and write accesses to the respective buffers of respective ones of the plurality of regions of the first memory. In one implementation, tracking is done at a coarse-grain level, such as a whole feature-map row level. In one implementation, the inter-layer-communication (ILC) unit 3760-3765 can track access to respective buffers with corresponding respective indexes to point to an ILC entry. The inter-layer-communication (ILC) unit 3760-3765 does not need to store buffer region boundaries or other information about the buffer. Instead, the compute cores 3715-3755 can be responsible for accessing the correct ILC entry index that corresponds to a respective shared buffer. In one implementation, an identifier of a given compute core 3720 received in a synchronization command can be mapped to a count associated with a given region (e.g., buffer) of a given portion of the first memory 3705.
Referring now to
At 3830, the inter-layer-communication (ILC) unit 3760-3765 can control access to the buffers of the respective one of the plurality of regions of the first memory 3750, 3710 by the respective compute cores 3715-3755 based on the respective read and write access tracking associated with the buffers. In one implementation, the inter-layer-communication (ILC) unit 3760-3765 can allow or block requests to a corresponding respective shared buffer 3770 from one or more respective producer compute cores 3720 and one or more respective consumer compute cores 3740 based on the corresponding ILC entry index. For example, the inter-layer-communication (ILC) unit 3760-3765 can allow write access to a respective shared buffer 3770 as long as the current unit count (ic) in the corresponding ILC entry index is less than the maximum count (ix). If the given write access is allowed, the inter-layer-communication (ILC) unit 3760-3765 increments the current unit count (ic) by an amount of units (i+) for the given write access, as illustrated in
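By way of a non-limiting illustration, the allow/block behavior of an ILC entry may be sketched as follows; the read-side check and decrement amount are assumptions chosen to be consistent with the write-side description above:

```python
# Illustrative sketch only: an inter-layer-communication (ILC) entry tracking a
# shared buffer with a current unit count (ic) and a maximum count (ix). The read
# side decrement and its check are assumptions consistent with the producer-side
# description above.
class IlcEntry:
    def __init__(self, max_count: int):
        self.ic = 0            # current unit count (e.g. feature map rows in the buffer)
        self.ix = max_count    # maximum count the buffer region can hold

    def request_write(self, i_plus: int) -> bool:
        """Allow the producer write while the current count is below the maximum."""
        if self.ic < self.ix:
            self.ic += i_plus
            return True
        return False           # producer is blocked; it must retry later

    def request_read(self, i_minus: int) -> bool:
        """Allow the consumer read once enough units have been produced."""
        if self.ic >= i_minus:
            self.ic -= i_minus
            return True
        return False           # consumer is blocked until more data is produced

entry = IlcEntry(max_count=4)
print(entry.request_read(1))   # False: nothing produced yet
print(entry.request_write(1))  # True: producer writes one row
print(entry.request_read(1))   # True: consumer may now read it
```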
Referring now to
Referring now to
Referring now to
Referring to
Storage of weight arrays in the memory macros, in accordance with aspects of the present technology, can be configured to improve the performance of the memory processing unit (MPU). One or more memory macros can be configured to store all the weights needed for access by the compute cores of a given group. The one or more memory macros can be configured to provide enough memory access bandwidth for the compute cores in a given group. The memory macros can be optimized for read access by the compute cores. The number of internal memory banks, arrangement and the like of the memory can be transparent to the architectural design of the memory processing unit (MPU).
Referring again to
Referring now to
At 4520, the 4-dimension array, expanded 3-dimension array or expanded 2-dimension array can be quantized, as illustrated in
At 4530, the filters of the quantized array can be unrolled and the bias value and scaling exponent can be appended, as illustrated in
At 4540, the unrolled and appended filters can be reshaped to fit into a physical channel of a memory, as illustrated in
At 4550, the reshaped filters can be rotated, as illustrated in
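As a rough illustration of the packing flow at 4520 through 4550, the following sketch quantizes a filter array, unrolls each filter, appends a bias value and scaling exponent, reshapes to a physical channel width, and rotates the result. The symmetric quantization scheme, the 64-entry channel width, the per-filter rotation amount and all names are illustrative assumptions, not the patent's exact layout.

```python
import numpy as np

def pack_weights(weights, biases, channel_width=64):
    # Illustrative symmetric 8-bit quantization with a single scaling exponent;
    # assumes a non-zero weight array of shape (filters, kernel_h, kernel_w, channels).
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    scale_exp = np.int8(np.round(np.log2(scale)))

    packed = []
    for f in range(q.shape[0]):
        flat = q[f].reshape(-1)                                   # unroll the filter
        flat = np.concatenate([flat,                              # append bias value and
                               np.array([biases[f], scale_exp],   # scaling exponent
                                        dtype=np.int8)])
        pad = (-flat.size) % channel_width                        # fit the physical channel
        rows = np.pad(flat, (0, pad)).reshape(-1, channel_width)
        rows = np.roll(rows, shift=f, axis=1)                     # rotate (placement assumption)
        packed.append(rows)
    return np.concatenate(packed, axis=0)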
Again, organizing each of the memory regions into a plurality of memory blocks and coupling a core group of a respective processing region to a set of memory blocks that are proximate to the given core group, while not coupling it to memory blocks in the adjacent memory regions that are distal from the given core group, can increase memory bandwidth and throughput. Providing more, but smaller, flat memory blocks by organizing each of the plurality of memory regions into a respective set of memory blocks can provide increased memory bandwidth for increased performance. Further increasing the number of memory blocks in each of the plurality of first memory regions can further increase the memory bandwidth. Referring to
Referring again to
Compute functions such as the reshape function can be implemented by the control circuitry and/or inter-layer communication (ILC) units 450-456. Reshaping can be supported by adjusting the corresponding increment and decrement counts of the inter-layer communication unit. For example, the increment count can be set to +4 and the decrement count can be set to −6 to reshape a 6×4 producer output to a 4×6 consumer input in a per-row ILC synchronization scheme, as illustrated in
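As a concrete trace of the +4/−6 example, assuming the per-row counting behavior described above (the event order below is made up purely for illustration):

```python
# The producer emits 6 rows of 4 units (+4 per write) and the consumer pulls
# 4 rows of 6 units (-6 per read); the shared counter alone keeps them in step.
ic = 0                                   # current unit count in the shared buffer
events = ["w", "w", "r", "w", "w", "r", "w", "r", "w", "r"]
for ev in events:
    if ev == "w":
        ic += 4                          # producer wrote one 4-element row
    elif ic >= 6:
        ic -= 6                          # consumer consumed one 6-element row
    else:
        raise RuntimeError("consumer would be blocked here")
print(ic)                                # all 24 units produced and consumed -> 0
```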
The sigmoid function can be approximated in the compute cores using the piecewise equation:
as illustrated in
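The exact piecewise equation is not reproduced in this text, so the following shows a commonly used three-segment approximation purely as an assumed stand-in for how a compute core might evaluate a sigmoid without exponentials; the breakpoints and slope are not taken from the present description.

```python
def sigmoid_approx(x):
    # Saturate outside the linear region; a straight line in between.
    if x <= -2.5:
        return 0.0
    if x >= 2.5:
        return 1.0
    return 0.2 * x + 0.5
```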
Generally, feature maps can be encoded as integer data, B-float data, group B-float or the like. Referring now to
In another implementation, the feature map pixels can be encoded as Brain Floating Point (B-float) values, including a fraction and an exponent. For example, the feature map pixels can be represented by 16 bits, including an 8-bit signed fraction and an 8-bit exponent. The 8-bit signed fraction can include a sign bit, 7 explicitly stored fraction bits and 1 hidden fraction bit, as illustrated in
In yet another implementation, the feature map pixels can be represented as B-float values, wherein each group of n channels of pixels has its own dynamic exponent. The number n of channels should be less than or equal to the number of physical channels. B-float encoding wherein groups of n channels are encoded with a given dynamic exponent is referred to herein as Group B-float encoding. Group B-float encoding advantageously does not need a pilot data phase, and advantageously adapts to runtime conditions. In most cases, Group B-float encoding can advantageously utilize substantially the same memory storage and memory bandwidth as integer encoded data when storing the Group B-float encoded data, in accordance with aspects of the present technology.
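A minimal sketch of Group B-float encoding follows; the group size, the 9-bit mantissa width and the rounding behavior are assumptions for illustration (the 9-bit mantissa matches the feature memory organization described later), not the hardware encoder.

```python
import math

def encode_group_bfloat(values, group_size=8, mantissa_bits=9):
    groups = []
    for i in range(0, len(values), group_size):
        group = values[i:i + group_size]
        max_mag = max((abs(v) for v in group), default=0.0)
        exp = math.frexp(max_mag)[1] if max_mag else 0        # shared dynamic exponent
        scale = 2.0 ** (exp - (mantissa_bits - 1))            # mantissa step size
        lim = 2 ** (mantissa_bits - 1)
        mantissas = [max(-lim, min(lim - 1, round(v / scale)))  # signed, no hidden one
                     for v in group]
        groups.append((exp, mantissas))
    return groups
```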
Referring now to
Referring now to
It is to be appreciated that the bandwidth utilized for receiving the elements of the first matrix A by the multiply-and-accumulate (MAC) unit can be reduced by reusing the corresponding element of matrix A. The plurality of accumulators of the multiply-and-accumulate (MAC) 6900 enable the corresponding element of matrix A to be reused rather than reloaded, thereby reducing bandwidth utilization and/or reducing energy consumption associated with loading data from memory into the MAC. The width of the compute cores (e.g., the number of accumulators of the multi-accumulator MAC and the number of channels) can be balanced against the number of compute cores to optimize the utilization of the compute cores. In an exemplary implementation, a compute core can include 64 physical channels and 16 multi-accumulator MAC units; therefore, each multi-accumulator MAC unit can include 4 accumulators.
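The reuse of a matrix A element across accumulators can be illustrated with a toy software model (pure Python, small operands; the shapes and names are assumptions, not the hardware datapath):

```python
def multi_acc_mac_row(a_row, b, num_accumulators=4):
    # Computes num_accumulators output columns for one row of A at once.
    acc = [0] * num_accumulators                 # one running sum per output column
    for k, a_val in enumerate(a_row):            # a_val is fetched from memory once...
        for j in range(num_accumulators):
            acc[j] += a_val * b[k][j]            # ...and reused across the accumulators
    return acc
```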
Referring now to
Operation of the near memory (M) compute core including a plurality of multi-accumulator multiply-and-accumulate (MAC) units will be further explained with reference to
At 7120, corresponding multiply and accumulate operations can be performed using the current weight value and respective ones of the plurality of current input feature values to generate corresponding current accumulated values by the respective multiply and accumulate units during the first cycle (T=0). Therefore, during the first cycle (T=0), the current weight value is reused in the plurality of multiply and accumulate units. For example, a first multiply and accumulate unit can accumulate the product of the weight value (0,0,0) and the input feature map value (0,0,0), and the second multiply and accumulate unit can accumulate the product of the weight value (0,0,0) and the adjacent input feature map value (0,1,0) during the first cycle. The weight value (0,0,0) is loaded once from memory and used by the first multiply and accumulate unit, and also “reused” by the second multiply and accumulate unit without the need to reload the value from memory.
At 7130, the operations at 7110 and 7120 can be iterated through corresponding input channels of the input feature map and corresponding input channels of the weights. At 7140, the operations at 7110-7130 can be iterated through the kernel height and kernel width of the weights, and the corresponding map width and map height in the input feature map. For example, at a second cycle (T=1), a second weight value (0,1,0) can be loaded from memory and third and fourth input feature map values (0,1,0) and (0,2,0) can be loaded from the memory. The product of the current weight value and the current respective input feature map values can be added to the accumulated value from the first cycle.
After iterating through the kernel height and kernel width of the weights and corresponding map width and map height in the input feature map, the corresponding current accumulated values from the respective multiply and accumulate units can be output as corresponding output feature map values, at 7150. For example, at cycle R×C×S, the accumulated value of the first multiply and accumulate unit can be output as a corresponding output feature map value (1,1,0), and the accumulated value in the second multiply and accumulate unit can be output as a corresponding output feature map value (1,2,0) in a first output channel of the output feature map.
At 7160, the current accumulated values in the respective multiply and accumulate units can be reset, and the operations at 7110-7150 can be iterated through the map width and map height of the input feature map and the corresponding kernel height and kernel width of the weights. At 7170, the operations at 7110-7160 can be iterated through the filters of the weights to generate the complete output feature map.
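A plain-software sketch of the loop nest at 7110 through 7170 follows; it assumes stride 1, no padding, an even output width, two accumulators per MAC unit, and weights[f][r][s][c] / ifm[y][x][c] layouts, none of which are mandated by the description above.

```python
def conv_two_wide(ifm, weights, out_h, out_w):
    num_filters = len(weights)
    ofm = [[[0] * num_filters for _ in range(out_w)] for _ in range(out_h)]
    for f in range(num_filters):                                # 7170: iterate filters
        for oy in range(out_h):
            for ox in range(0, out_w, 2):                       # 7160: step two outputs at a time
                acc = [0, 0]                                    # reset the two accumulators
                for r in range(len(weights[f])):                # 7140: kernel height
                    for s in range(len(weights[f][r])):         # 7140: kernel width
                        for c in range(len(weights[f][r][s])):  # 7130: input channels
                            w = weights[f][r][s][c]             # 7110: one weight load...
                            acc[0] += w * ifm[oy + r][ox + s][c]      # 7120: ...reused by
                            acc[1] += w * ifm[oy + r][ox + s + 1][c]  # both accumulators
                ofm[oy][ox][f] = acc[0]                         # 7150: write adjacent outputs
                ofm[oy][ox + 1][f] = acc[1]
    return ofm
```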
Data reuse in the multiply-and-accumulate (MAC) units advantageously keeps the units busy. The increased utilization increases performance (e.g., frames per second (FPS)) and/or reduces energy consumption (e.g., increases the frames per second processed per Watt of energy consumed by the compute cores (FPS/W)). The near memory (M) compute cores, in accordance with aspects of the present technology, can advantageously be configured to parallelize computation over output channels. Input feature map data can be shared among output channel computations. The plurality of accumulators in each multiply-and-accumulate (MAC) unit, which store extra partial sums, reduces the input feature map bandwidth utilization by reusing a single input feature map over many cycles.
The memory processing unit, in accordance with aspects of the present technology, can support weights having a first or second precision. Referring now to
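By way of illustration only (the actual weight memory layout is described with reference to the figures), a packing routine for the two precisions might look like the following, where the byte ordering of the two 4-bit nibbles is an assumption:

```python
def pack_weight_bytes(weights, bits=8):
    # 8-bit weights occupy one byte each; two signed 4-bit weights share a byte.
    if bits == 8:
        return bytes((w & 0xFF) for w in weights)
    if bits == 4:
        if len(weights) % 2:
            weights = list(weights) + [0]                 # pad to an even count
        return bytes(((weights[i] & 0xF) | ((weights[i + 1] & 0xF) << 4))
                     for i in range(0, len(weights), 2))
    raise ValueError("only the first (8-bit) or second (4-bit) precision is supported")
```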
Referring now to
At 7510, memory balancing can consider the impact on memory based on the number of parameters and the number of layers for a number of different neural-network models. In an exemplary implementation, the neural network models can include MobileNet, MobileNetV2, EfficientNet B2, NASNet, InceptionV3, ResNet 50 and other similar neural networks, as illustrated in
At 7520, bandwidth balancing can consider the impact of core utilization and memory reuse. A design goal can be to allow as much as 100% utilization of the core time. The core time utilization depends upon several factors, including good mapping. However, with reference to
At 7530, the impact of the design point can be considered. For the memory processing unit (MPU), the number of compute towers, communication towers, core groups, feature memory blocks, inter-layer-communication (ILC) units, input ports and output ports, the total weight memory, the total feature memory and the like can be considered. For the core group, the number of near-memory (M) cores, the number of arithmetic (A) cores, the weight memory, the weight memory data bus width, the weight memory address size, the number of weight memory arbiters, the number of feature memory read ports, the number of feature memory write ports, the feature memory address size per port, the number of feature map arbiters, the number of ILC ports, the number of ILC arbiters and the like can be considered. For the feature map memory, the memory size, the number of read ports, the number of parallel accesses, the banks, the address domain, the address granularity, the address bit length, the data width, the expected latency, the number of ports and the like can be considered. For the weight memory, the memory size, the total entries, the number of read ports, the number of write ports, the number of parallel accesses, the banks, the address domain, the address granularity, the address/port bit width, the data/port bit width, the expected latency and the like can be considered. For the near-memory (M) cores, the number of MAC units, the number of accumulators of the MAC units, the number of physical channels, the number of weight memory buffers, the number of feature map memory buffers, the number of weight memory ports, the number of feature map memory ports, the number of feature map writeback ports, the number of ILC ports, the MAC sequence, the zero skipping, the group B-float support and the like can be considered. For the arithmetic (A) cores, the number of ALUs, the number of physical channels, the number of output buffers, the number of feature map memory buffers, the number of feature map memory fetch ports, the number of ILC ports and the like can be considered. For the inter-layer-communication (ILC) unit, the number of ports, the number of parallel accesses, the maximum layers per ILC, the index bit width, the index domain, the number of bits of the counters, the accesses and the like can be considered.
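One way to make such a design-point evaluation concrete is to gather the parameters into a configuration structure; the following sketch uses assumed names and placeholder default values that are not the figures of the present technology.

```python
from dataclasses import dataclass

@dataclass
class MpuDesignPoint:
    compute_towers: int = 4
    core_groups_per_tower: int = 8
    near_memory_cores_per_group: int = 4
    arithmetic_cores_per_group: int = 1
    mac_units_per_core: int = 16
    accumulators_per_mac: int = 4
    physical_channels: int = 64
    weight_memory_kib_per_group: int = 256
    feature_memory_kib_per_block: int = 64
    ilc_ports_per_group: int = 2

    def total_macs(self) -> int:
        # Rough peak-MAC count used when trading core count against core width.
        return (self.compute_towers * self.core_groups_per_tower *
                self.near_memory_cores_per_group * self.mac_units_per_core)
```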
At 7540, the supported operations can be considered. The supported operations for 2D convolutions can include convolution, depthwise convolution, separable convolution, pointwise convolution, MP convolution, deconvolution and the like. For fully connected layers, dense operations and the like can be supported. Supported scaling operations can include max pooling, average pooling, global average pooling, up-sampling, zero-padding and the like. For branch and merge, supported operations can include add, subtract, multiply, maximum, minimum, concatenate and the like. For activation, supported operations can include linear, ReLU, Leaky ReLU, P-ReLU, Threshold ReLU, ReLU-Max, ReLU-6, sigmoid/logistic, Hyperbolic Tangent, Mish, Swish and the like. Other supported operations can include flatten, reshape, batch normalization, rescale, constant add, constant multiply, clip and the like.
At 7550, target models can be considered. Target models can include MobileNet including versions 1, 2 and 3, ResNet including 34, 50 and 101, SSD, Tiny Yolo including versions 1 and 3, Yolo including versions 3 and 4, OpenPose, Xception, Inception, EfficientNet and the like. At 7560, supported memory organizations can also be considered. For feature memories, each feature map entry can be made of one exponent and 8 mantissas. The mantissas may not include a hidden one, unlike the original B-float format. The exponent can be 8 bits and each mantissa can be 9 bits. A successive number of entries can have the same exponent depending on the group B-float size. For the weight memory, the precision can be 8 or 4 bits.
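The feature memory entry described above (one 8-bit exponent plus eight 9-bit mantissas, 80 bits in total) could be packed as follows; the field ordering within the entry is an assumption.

```python
def pack_feature_entry(exponent, mantissas):
    # One 8-bit exponent in the low bits, followed by eight 9-bit mantissa fields.
    assert len(mantissas) == 8
    entry = exponent & 0xFF
    for i, m in enumerate(mantissas):
        entry |= (m & 0x1FF) << (8 + 9 * i)     # 9-bit mantissa fields, no hidden one
    return entry                                 # an 80-bit integer entry
```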
The foregoing descriptions of specific embodiments of the present technology have been presented for purposes of illustration and description. They are not intended to be exhaustive or to limit the present technology to the precise forms disclosed, and obviously many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the present technology and its practical application, to thereby enable others skilled in the art to best utilize the present technology and various embodiments with various modifications as are suited to the particular use contemplated. It is intended that the scope of the invention be defined by the claims appended hereto and their equivalents.
This application claims the benefit of U.S. Provisional Patent Application No. 63/310,031 filed Feb. 14, 2022, which is incorporated herein in its entirety.