The present disclosure generally relates to compute-in-memory circuits, and more specifically to compute-in-memory processors supporting both general-purpose CPU and deep learning.
While progress has been made in compute-in-memory (CIM) techniques, for end-to-end operation of AI-related tasks a general-purpose computing unit, e.g., a CPU, is not only mandatory but also often dominates the total latency due to significant pre/post-processing, data movement/alignment, and versatile non-MAC tasks. A conventional architecture, which engages a CPU core, an ASIC/CIM accelerator, and a direct memory access (DMA) engine for data transfer, can suffer from processor stall and underutilization issues. Deep neural network (DNN) computing often takes only 12%-50% of total run time, leaving performance bottlenecked by CPU processing and data transfer. For instance, in a recent augmented reality/virtual reality system-on-a-chip (AR/VR SoC), 83% of run time was spent by the CPU on data movement and preparation. Unfortunately, current CIM developments do not support general-purpose CPU operations and do not address the needed improvements in CPU-related processing and data transfer.
The description provided in the background section should not be assumed to be prior art merely because it is mentioned in or associated with the background section. The background section may include information that describes one or more aspects of the subject technology.
According to certain aspects of the present disclosure, a compute-in-memory processor is provided. The compute-in-memory processor includes central computing units configured to operate in a central processing unit mode and a deep neural network mode. A data activation memory and a data cache output memory are in communication with the central computing units. In the deep neural network mode, the data activation memory is configured as input memory and the data cache output memory is configured as output memory. In the central processing unit mode, the data activation memory is configured as a first data cache and the data cache output memory is configured as a register file and a second data cache.
According to another aspect of the present disclosure, a method is provided. The method includes operating central computing units in a central processing unit mode, wherein, in the central processing unit mode, a data activation memory in communication with the central computing units is configured as a first data cache and a data cache output memory, in communication with the central computing units, is configured as a register file and a second data cache. The method includes selectively switching the central computing units from the central processing unit mode to a deep neural network mode, wherein, in the deep neural network mode, the data activation memory is configured as input memory and the data cache output memory is configured as output memory.
According to other aspects of the present disclosure, a compute-in-memory processor is provided. The compute-in-memory processor includes central computing units configured to operate in a central processing unit mode and a deep neural network mode. A first level cache and a second level cache are in communication with the central computing units. The second level cache includes a plurality of SRAM banks. In the central processing unit mode, a second level compute-in-memory in communication with the second level cache is configured as an instruction controlled compute-in-memory core. In the deep neural network mode, the second level compute-in-memory is configured as a weight memory for compute-in-memory MAC operations.
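By way of a non-limiting illustration, the mode-dependent roles described above may be modeled in software. The following C sketch is purely conceptual; the type and field names (gpcim_mode_t, mem_role_t, gpcim_config) are hypothetical and do not correspond to any particular hardware register map or interface of the disclosed processor.

```c
#include <stdio.h>

/* Hypothetical operating modes of the central computing units. */
typedef enum { MODE_CPU, MODE_DNN } gpcim_mode_t;

/* Hypothetical roles a memory block can assume in each mode. */
typedef enum {
    ROLE_DATA_CACHE,        /* first data cache (CPU mode)           */
    ROLE_REGFILE_AND_CACHE, /* register file + second data cache     */
    ROLE_DNN_INPUT,         /* DNN input (activation) memory         */
    ROLE_DNN_OUTPUT,        /* DNN output memory                     */
    ROLE_CIM_CORE,          /* instruction-controlled CIM core       */
    ROLE_WEIGHT_MEMORY      /* weight memory for CIM MAC operations  */
} mem_role_t;

typedef struct {
    mem_role_t data_activation_mem; /* data activation memory   */
    mem_role_t data_cache_out_mem;  /* data cache output memory */
    mem_role_t l2_cim;              /* second level CIM          */
} gpcim_config;

/* Derive the memory roles implied by the selected mode. */
static gpcim_config configure(gpcim_mode_t mode)
{
    gpcim_config c;
    if (mode == MODE_CPU) {
        c.data_activation_mem = ROLE_DATA_CACHE;
        c.data_cache_out_mem  = ROLE_REGFILE_AND_CACHE;
        c.l2_cim              = ROLE_CIM_CORE;
    } else { /* MODE_DNN */
        c.data_activation_mem = ROLE_DNN_INPUT;
        c.data_cache_out_mem  = ROLE_DNN_OUTPUT;
        c.l2_cim              = ROLE_WEIGHT_MEMORY;
    }
    return c;
}

int main(void)
{
    gpcim_config c = configure(MODE_DNN);
    printf("DNN mode: activation memory role = %d\n", c.data_activation_mem);
    return 0;
}
```

In this model, switching the mode value re-derives the role of every memory block, mirroring the reconfiguration between the central processing unit mode and the deep neural network mode described above.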
It is understood that other configurations of the subject technology will become readily apparent to those skilled in the art from the following detailed description, wherein various configurations of the subject technology are shown and described by way of illustration. As will be realized, the subject technology is capable of other and different configurations and its several details are capable of modification in various other respects, all without departing from the scope of the subject technology. It should be noted that although various aspects may be described herein with reference to healthcare, retail, educational, or corporate settings, these are examples only and are not to be considered limiting. The teachings of the present disclosure may be applied to any mobile device environments, including but not limited to home environments, healthcare environments, retail environments, educational environments, corporate environments, and other appropriate environments. Accordingly, the drawings and detailed description are to be regarded as illustrative in nature and not as restrictive.
The disclosure is better understood with reference to the following drawings and description. The elements in the figures are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the disclosure. Moreover, in the figures, like reference numerals may designate corresponding parts throughout the different views.
In one or more implementations, not all of the depicted components in each figure may be required, and one or more implementations may include additional components not shown in a figure. Variations in the arrangement and type of the components may be made without departing from the scope of the subject disclosure. Additional components, different components, or fewer components may be utilized within the scope of the subject disclosure.
The detailed description set forth below is intended as a description of various implementations and is not intended to represent the only implementations in which the subject technology may be practiced. As those skilled in the art would realize, the described implementations may be modified in various different ways, all without departing from the scope of the present disclosure. Accordingly, the drawings and description are to be regarded as illustrative in nature and not restrictive.
In certain aspects of the present disclosure, a general-purpose compute-in-memory (GPCIM) processor combining DNN operations and a vector CPU is provided. Utilizing special reconfigurability, dataflow, and instruction set support, a 65 nm test chip, for example, demonstrates a 28.5 TOPS/W DNN macro efficiency and a best-in-class peak CPU efficiency of 802 GOPS/W. Benefiting from a data locality flow, a 37% to 55% end-to-end latency improvement on AI-related applications is achieved by eliminating inter-core data transfer.
As schematically illustrated in
With reference to
Many traditional instruction-based compute-in-memory (CIM) works, such as the traditional CIM work 66, focus on implementing ALU logic in or near the first level SRAM storage, such as the L1 cache and register files (RF). However, modern processors for general-purpose computing contain multiple levels of caches. Data movement through the cache hierarchy is a significant portion of CPU processing, especially for ML/AI tasks, which have tremendous input/weight data. While existing conventional instruction-based CIM works can dramatically improve the energy efficiency of the CPU pipeline core, the heavy data movement through the cache hierarchy that they ignore holds back the efficiency of the CPU system as a whole. In contrast, the L2 CIM architecture 68 of the present disclosure optimizes CPU processing efficiency including the cache system. For recent large AI/ML tasks, the conventional L1 cache and RF are not enough to store the input/weight data, which means the majority of the data for ML/AI computing is stored in lower-level caches such as the L2 cache 70. The L2-CIM architecture 68 can process the majority of the data locally at the L2 cache 70 level instead of moving all data to L1. As shown in
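As a simplified, non-limiting illustration of this data-locality benefit, the following C sketch contrasts a conventional flow, in which L2-resident operands are first copied into a small L1-sized buffer before the core processes them, with a flow in which the same reduction is performed where the data already resides. The buffer sizes and the reduction itself are hypothetical and are chosen only to make the extra copy visible; the sketch does not model the actual hardware pipeline.

```c
#include <stdint.h>
#include <string.h>

#define L2_WORDS 4096   /* hypothetical L2-resident working set size */
#define L1_WORDS 256    /* hypothetical L1-sized staging buffer      */

/* Conventional flow: stream L2-resident data through an L1-sized
 * buffer before the core accumulates it.  The memcpy models data
 * movement through the cache hierarchy. */
static int64_t sum_via_l1(const int32_t *l2_data)
{
    int32_t l1_buf[L1_WORDS];
    int64_t acc = 0;
    for (int base = 0; base < L2_WORDS; base += L1_WORDS) {
        memcpy(l1_buf, &l2_data[base], sizeof(l1_buf)); /* L2 -> L1 move */
        for (int i = 0; i < L1_WORDS; i++)
            acc += l1_buf[i];                           /* compute in core */
    }
    return acc;
}

/* L2-local flow: the reduction is performed where the data resides,
 * so no intermediate copy is needed. */
static int64_t sum_in_place(const int32_t *l2_data)
{
    int64_t acc = 0;
    for (int i = 0; i < L2_WORDS; i++)
        acc += l2_data[i];
    return acc;
}
```

In the model above, the conventional path performs one extra copy of every operand; eliminating that copy at the hardware level is the effect the L2-CIM architecture 68 is directed to.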
The disclosed L2-CIM 69 fully utilizes the potential maximum bandwidth and large SRAM size of the L2 cache 70 for vectorized parallel in-memory computing to achieve superior performance and energy efficiency improvements over conventional GP-CIM works.
For example, in the cache hierarchy of a traditional CPU architecture, in order to achieve a higher hit rate and reduce the average memory access time, a lower-level cache, such as the L2 cache, has more SRAM banks with larger bank sizes for higher associativity compared with the L1 cache. In that case, the potential memory bandwidth of the L2 cache is significantly large. In the L2 CIM architecture 68, the CIM 69 in the L2 cache 70 can fully utilize the potential L2 bandwidth and L2 cache size by cooperating with a specially designed replacement policy and a customized RISC-V vector set extension 80. A representative example is shown in
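By way of a non-limiting illustration of how bank-level parallelism can expose the potential L2 bandwidth, the following C sketch broadcasts a single element-wise vector operation to every bank, with each bank's near-array ALU slice operating on its own data. The bank count, bank size, and function name (vcim_add_imm) are hypothetical and are not tied to any particular instruction encoding of the customized RISC-V vector set extension.

```c
#include <stdint.h>

#define NUM_BANKS  16     /* hypothetical number of L2 SRAM banks */
#define BANK_WORDS 1024   /* hypothetical words per bank          */

/* One L2 SRAM bank with an associated near-array vector ALU slice. */
typedef struct {
    int32_t words[BANK_WORDS];
} sram_bank;

/* Broadcast a single element-wise "add immediate" vector operation to
 * every bank; each bank operates on its own slice, so in hardware the
 * usable bandwidth scales with the number of banks accessed in
 * parallel rather than with a single L1 port. */
static void vcim_add_imm(sram_bank banks[NUM_BANKS], int32_t imm)
{
    for (int b = 0; b < NUM_BANKS; b++)      /* in hardware: concurrent banks */
        for (int i = 0; i < BANK_WORDS; i++)
            banks[b].words[i] += imm;
}
```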
The L2-CIM architecture 68 enables the low-level cache (e.g., the L2 cache 70) to support general-purpose vector CIM processing (e.g., the vector processing mode 80) and accelerator processing (e.g., the accelerator mode 82). It enables a wide range of both ML/AI and vector processing applications with high system energy efficiency.
For example, in the vector processing mode 80, the L2 CIM 69 part is an instruction-controlled CIM core. The processing is finished in the near-array vector ALU groups 70. The vector processing mode 80 is designed for general-purpose vector computing workloads, which have more complicated computing kernels than the MAC operations in ML. The accelerator mode 82 reconfigures the L2 CIM 69 part into a weight memory for CIM MAC operations. The near-array vector ALU groups 70 can be reconfigured into adder tree groups (e.g., the plurality of adder trees 52) for partial sum accumulation. Instead of finishing all the processing work in the near-array logic groups, the accelerator mode 82 realizes 1b multiplication inside the bitcell and accumulates the column results using the bitlines. The accelerator mode 82 is implemented for the most efficient ML model inference of AI applications in this L2 CIM architecture 68.
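The accelerator-mode dot product can be illustrated with a minimal behavioral sketch in C, assuming 1-bit weights stored in the bitcells of one column and 8-bit input activations applied bit-serially; the row count and bit widths are hypothetical. Each bitcell's 1-bit multiplication is modeled as a logical AND, the bitline accumulation as a per-cycle column sum, and the per-bit results are shifted by the input-bit significance and accumulated into the partial sum.

```c
#include <stdint.h>

#define ROWS    64   /* rows sharing one bitline (hypothetical) */
#define IN_BITS 8    /* input activation bit width (hypothetical) */

/* Behavioral model of one column's accelerator-mode MAC: each bitcell
 * holds a 1-bit weight and multiplies it (AND) with the input bit on
 * its wordline; the bitline sums the column's results each cycle, and
 * the per-bit sums are shifted and accumulated into the partial sum. */
static int32_t cim_column_mac(const uint8_t weight[ROWS],  /* 1-bit weights */
                              const uint8_t input[ROWS])   /* 8-bit inputs  */
{
    int32_t partial_sum = 0;
    for (int bit = 0; bit < IN_BITS; bit++) {
        int32_t bitline_sum = 0;
        for (int r = 0; r < ROWS; r++) {
            uint8_t in_bit = (input[r] >> bit) & 1u;
            bitline_sum += (int32_t)(in_bit & (weight[r] & 1u)); /* 1b multiply */
        }
        partial_sum += bitline_sum << bit;  /* weight by input-bit significance */
    }
    return partial_sum;   /* equals sum over rows of input[r] * weight[r] */
}
```

For multi-bit weights, the corresponding column results would in turn be combined by the adder tree groups (e.g., the plurality of adder trees 52) for partial sum accumulation, as described above.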
In one aspect, a method may be an operation, an instruction, or a function and vice versa. In one aspect, a clause or a claim may be amended to include some or all of the words (e.g., instructions, operations, functions, or components) recited in either one or more clauses, one or more words, one or more sentences, one or more phrases, one or more paragraphs, and/or one or more claims.
To illustrate the interchangeability of hardware and software, items such as the various illustrative blocks, modules, components, methods, operations, instructions, and algorithms have been described generally in terms of their functionality. Whether such functionality is implemented as hardware, software or a combination of hardware and software depends upon the particular application and design constraints imposed on the overall system. Skilled artisans may implement the described functionality in varying ways for each particular application.
As used herein, the phrase “at least one of” preceding a series of items, with the terms “and” or “or” to separate any of the items, modifies the list as a whole, rather than each member of the list (e.g., each item). The phrase “at least one of” does not require selection of at least one item; rather, the phrase allows a meaning that includes at least one of any one of the items, and/or at least one of any combination of the items, and/or at least one of each of the items. By way of example, the phrases “at least one of A, B, and C” or “at least one of A, B, or C” each refer to only A, only B, or only C; any combination of A, B, and C; and/or at least one of each of A, B, and C.
The word “exemplary” is used herein to mean “serving as an example, instance, or illustration.” Any embodiment described herein as “exemplary” is not necessarily to be construed as preferred or advantageous over other embodiments. Phrases such as an aspect, the aspect, another aspect, some aspects, one or more aspects, an implementation, the implementation, another implementation, some implementations, one or more implementations, an embodiment, the embodiment, another embodiment, some embodiments, one or more embodiments, a configuration, the configuration, another configuration, some configurations, one or more configurations, the subject technology, the disclosure, the present disclosure, other variations thereof and alike are for convenience and do not imply that a disclosure relating to such phrase(s) is essential to the subject technology or that such disclosure applies to all configurations of the subject technology. A disclosure relating to such phrase(s) may apply to all configurations, or one or more configurations. A disclosure relating to such phrase(s) may provide one or more examples. A phrase such as an aspect or some aspects may refer to one or more aspects and vice versa, and this applies similarly to other foregoing phrases.
A reference to an element in the singular is not intended to mean “one and only one” unless specifically stated, but rather “one or more.” The term “some” refers to one or more. Underlined and/or italicized headings and subheadings are used for convenience only, do not limit the subject technology, and are not referred to in connection with the interpretation of the description of the subject technology. Relational terms such as first and second and the like may be used to distinguish one entity or action from another without necessarily requiring or implying any actual such relationship or order between such entities or actions. All structural and functional equivalents to the elements of the various configurations described throughout this disclosure that are known or later come to be known to those of ordinary skill in the art are expressly incorporated herein by reference and intended to be encompassed by the subject technology. Moreover, nothing disclosed herein is intended to be dedicated to the public regardless of whether such disclosure is explicitly recited in the above description. No claim element is to be construed under the provisions of 35 U.S.C. § 112, sixth paragraph, unless the element is expressly recited using the phrase “means for” or, in the case of a method claim, the element is recited using the phrase “step for”.
While this specification contains many specifics, these should not be construed as limitations on the scope of what may be claimed, but rather as descriptions of particular implementations of the subject matter. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination. Moreover, although features may be described above as acting in certain combinations and even initially claimed as such, one or more features from a claimed combination can in some cases be excised from the combination, and the claimed combination may be directed to a subcombination or variation of a subcombination.
The subject matter of this specification has been described in terms of particular aspects, but other aspects can be implemented and are within the scope of the following claims. For example, while operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. The actions recited in the claims can be performed in a different order and still achieve desirable results. As one example, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Moreover, the separation of various system components in the aspects described above should not be understood as requiring such separation in all aspects, and it should be understood that the described program components and systems can generally be integrated together in a single software product or packaged into multiple software products.
The title, background, brief description of the drawings, abstract, and drawings are hereby incorporated into the disclosure and are provided as illustrative examples of the disclosure, not as restrictive descriptions. It is submitted with the understanding that they will not be used to limit the scope or meaning of the claims. In addition, in the detailed description, it can be seen that the description provides illustrative examples and the various features are grouped together in various implementations for the purpose of streamlining the disclosure. The method of disclosure is not to be interpreted as reflecting an intention that the claimed subject matter requires more features than are expressly recited in each claim. Rather, as the claims reflect, inventive subject matter lies in less than all features of a single disclosed configuration or operation. The claims are hereby incorporated into the detailed description, with each claim standing on its own as a separately claimed subject matter.
The claims are not intended to be limited to the aspects described herein, but are to be accorded the full scope consistent with the language claims and to encompass all legal equivalents. Notwithstanding, none of the claims are intended to embrace subject matter that fails to satisfy the requirements of the applicable patent law, nor should they be interpreted in such a way.
The present application claims the benefit of priority under 35 U.S.C. § 119 from U.S. Provisional Patent Application Ser. No. 63/506,771 entitled “Compute-in-Memory Processor Supporting Both General-Purpose CPU and Deep Learning,” filed on Jun. 7, 2023, the disclosure of which is hereby incorporated by reference in its entirety for all purposes.
This invention was made with government support under grant number CCF-2008906 awarded by the National Science Foundation. The government has certain rights in the invention.