Examples generally relate to a system level compute and memory architecture that may integrate different technologies and/or different variations of the hardware architectures. In particular, examples include a hierarchy of closely connected circuits (e.g., compute-in-memory (CiM), compute-near-memory (CnM) and compute-outside-of-memory (CoM)) to process and store data to execute computations.
Machine learning (e.g., neural networks, deep neural networks, etc.) workloads may include a significant number of operations. For example, machine learning workloads may include numerous nodes that each execute different operations. Such operations may include General Matrix Multiply operations, multiply-accumulate operations, etc. The operations may consume memory and processing resources to execute, and may involve different data formats.
CiM elements (e.g., circuitry) may accelerate artificial intelligence (AI) and/or machine learning (ML) applications and computations by avoiding and/or mitigating memory bottlenecks. CiM accelerators may achieve efficiency due to a considerable reduction in data movement between the memory and the compute units. CiM architectures may seek to achieve lower power, resolve memory bottlenecks and/or implement AI in battery operated and/or power-constrained devices. Existing CiM architectures may include analog-based cores using static random-access memories (SRAMs) or other memory technologies such as magnetoresistive random-access memories (MRAMs), resistive random-access memories (RRAMs), etc. CiM architectures may be homogenous in nature. That is, CiM architectures may be analog-based pure compute-in-memory, while in examples of digital-based compute-near-memory, logic is positioned very close to the memory. Previously existing implementations may not integrate various levels of CiM architectures, resulting in inefficiency.
Digital architectures may include CnM architectures, where the compute units of the CnM are positioned proximate to the memory. Thus, CiM architectures may operate in an analog domain and perform a first set of functions, while CnM architectures may operate in a digital domain and perform a second set of functions distinct from the first set of functions. The second set of functions may be arithmetic (e.g., multiplication, addition, subtraction, division, etc.) operations.
Examples provide a system level enhancement that integrates both analog and digital technologies and/or different variations of the hardware architecture to further enhance and leverage CiM and CnM technologies. Examples include unified weight storage and computation at leaf node compute units (e.g., CiM elements) to reduce and/or avoid memory bandwidth issues associated with moving weights from a centralized storage location. Examples provide enhancements to benefit small (e.g., low power) inference nodes, which may reduce reliance on the traditional von Neumann approach to compute. Examples further provide energy reduction, processing acceleration and/or efficiency relative to existing central processing unit (CPU) memory hierarchies. For example, the examples may provide a significant increase in the speed of computations and workload execution across different number formats.
The compute and memory architecture 100 may be categorized into a CiM layer 102, a CnM layer 104 and a CoM element 106. The CiM layer 102, the CnM layer 104 and the CoM element 106 may be connected to each other through different connections and electrical components.
The CiM layer 102 may comprise first-fourth CiM elements 102a-102d. The first-fourth CiM elements 102a-102d may be positioned within a memory array(s) (e.g., SRAM array(s)). The memory array(s) may be extremely dense and execute simple computations (e.g., multiply-accumulate (MAC) operations).
The CnM layer 104 includes first and second CnM elements 104a, 104b. The first and second CnM elements 104a, 104b are positioned proximate to and in the periphery of the memory arrays of the CiM elements 102a-102d. The CnM layer 104 executes high density compute that is slightly more complex than that of the first-fourth CiM elements 102a-102d (e.g., MAC, absolute value, rectified linear unit (ReLU) activation functions, etc.).
The CoM element 106 executes more complex compute. The CoM element 106 may be similar to an arithmetic logic unit (ALU) or floating-point unit (FPU). The CoM element 106 may be considered lower density compute, and is highly configurable and flexible. In some examples, the CoM element 106 may be a processor (e.g., CPU, host processor, graphics processing unit, vision processing unit, accelerator, etc.).
The CiM layer 102 may be considered the lowest level of the multi-level hierarchy. The CiM layer 102 may include first-fourth CiM elements 102a, 102b, 102c, 102d (e.g., cores and/or tiles, circuitry that includes memory and processing elements). The first-fourth CiM elements 102a-102d may operate in the analog domain to execute analog compute that is built within a memory, for example an SRAM or cache. The CiM layer 102 may include a C-2C ladder to execute analog computations (e.g., first computations such as MAC operations).
The CnM layer 104 includes first and second CnM elements 104a, 104b that execute second computations (e.g., accumulation, multiplication, absolute value, bias addition, a ReLU activation function for AI/ML applications, etc.). The CnM layer 104 may be at a level higher than the CiM layer 102. Each of the first and second CnM elements 104a, 104b (e.g., cores, circuitry, advanced processing elements, etc.) is associated with a group of the first-fourth CiM elements 102a-102d. For example, the first CnM element 104a is directly connected with the first and second CiM elements 102a, 102b to receive data from the first and second CiM elements 102a, 102b. The second CnM element 104b is directly connected with the third and fourth CiM elements 102c, 102d to receive data from the third and fourth CiM elements 102c, 102d.
The first and second CnM elements 104a, 104b perform the next level of computation and/or execute when outputs are to be computed across multiple of the first-fourth CiM elements 102a-102d. For example, the first CiM element 102a and the second CiM element 102b may process different data, and provide respective first and second outputs to the first CnM element 104a. The first CnM element 104a may execute an operation based on the first and second outputs to generate a third output. The third output may be provided to the first CiM element 102a and/or the second CiM element 102b for storage and/or further processing, and/or provided to the CoM element 106 for further processing.
In some examples, the first-fourth CiM elements 102a-102d may also operate as a CiM cache (e.g., L2 cache, discussed below) as a part of a processor architecture. Thus, from the perspective of the first and second CnM elements 104a, 104b, the first-fourth CiM elements 102a-102d may be accessed and operated as an existing memory, and the first and second CnM elements 104a, 104b may read the values from the first-fourth CiM elements 102a-102d treating the first-fourth CiM elements 102a-102d as a memory. The first and second CnM elements 104a, 104b may request data from the first-fourth CiM elements 102a-102d with an instruction set supported by the first-fourth CiM elements 102a-102d.
For example, read operations at the first and second CnM elements 104a, 104b may be executed with Pseudo-code I below. Pseudo-code I illustrates two types of instructions: (1) fetching the data from the memory, and (2) fetching the data from one of the CiM elements with an operation performed enroute.
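Pseudo-code I may take a form along the lines of the following sketch, in which the instruction mnemonics and operand ordering are illustrative assumptions rather than a fixed instruction set:

    read (CiM, location)             # (1) fetch data from a CiM element treated as a memory
    read (CiM, location, operation)  # (2) fetch data with an operation performed enroute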
Compute operations are further executed at the first and second CnM elements 104a, 104b. The CnM level instructions, in addition to the data fetch instruction described above in Pseudo-code I, may comprise compute instructions. The compute instructions may operate on data that is explicitly read from one or more CiM elements of the first-fourth CiM elements 102a-102d, treating the one or more CiM elements as a traditional memory. Some examples may read out from the first-fourth CiM elements 102a-102d with an implied instruction at the CiM level, or with a mix of the explicit reading and the implied instruction described above. An example Pseudo-code II is shown below:
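Pseudo-code II might resemble the following sketch; the instruction names and operand ordering are illustrative assumptions:

    a = read (CiM, location)             # explicit read, treating the CiM element as a traditional memory
    b = read (CiM, location, operation)  # implied CiM-level instruction, operation performed before b arrives at the CnM element
    c = compute (operation, a, b)        # CnM-level compute instruction producing output c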
In Pseudo-code II, “a” is a value read from a CiM location of the first-fourth CiM elements 102a-102d. “b” is read from a computation CiM of the first-fourth CiM elements 102a-102d, where the computation CiM executes an operation before data “b” reaches a corresponding one of the first and second CnM elements 104a, 104b that will further process data “b.” Finally, the compute instruction, which is executed by one of the first and second CnM elements 104a, 104b, operates upon “a” and “b” to produce output “c.” Thus, in some examples one of the first-fourth CiM elements 102a-102d stores data (e.g., second data), and one of the first and second CnM elements 104a, 104b fetches and/or reads the data from the one of the first-fourth CiM elements 102a-102d to execute an operation on the data.
The CoM element 106 may be a third arithmetic element (e.g., unit) at a level higher than the CnM layer 104 in the hierarchy. The CoM element 106 (e.g., an ALU and/or FPU, etc.) may execute third computations (e.g., complex functions such as exponents, trigonometric functions, square roots, etc.). The CoM element 106 may include arithmetic circuitry and/or a CPU controlling (e.g., overseeing) a larger set of CnM tiles or instances, such as the CnM layer 104. The CoM element 106 may be denoted as an arithmetic element (unit) and/or a CPU. The CoM element 106 (which may include more than one CoM element or instance) may be similar to a CPU core, and may also be an application specific accelerator unit comprising dedicated instructions. The commonality of instruction types carries forward from the CiM and CnM instructions. Pseudo-code III below exemplifies the capabilities of the CoM element 106.
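Pseudo-code III might resemble the following sketch; the nesting of the CiM read within the CnM read is an illustrative assumption based on the description that follows:

    a = read (CnM, location)                                   # line one: the CoM element accesses a CnM location
    b = read (CnM, location, read (CiM, location, operation))  # line two: the CnM location is populated from a CiM read with an enroute operation
    c = compute (operation, a, b)                              # line three: operation (e.g., multiply, add) on a and b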
In the first instruction (line one), the CoM element 106 is accessing a respective CnM location of the first and second CnM elements 104a, 104b, while in the second instruction (line two), the read instruction is accessing a respective location of the first and second CnM elements 104a, 104b which is in turn calling a CiM instruction over which an operation is performed.
In this example and in the second instruction, the respective data from a respective CiM element of the first-fourth CiM elements 102a-102d is operated upon and is therefore retrieved from the respective CiM (e.g., “read (CiM, location, operation)”), and stored into a CnM location of the first and second CnM elements 104a, 104b, which is then made available to the CoM element 106 (e.g., “CnM, location, read”). The third instruction (e.g., line) is an operation (e.g., multiply, add, subtract, general matrix multiply, etc.) performed on variables a and b.
Pseudo-code IV illustrates a way for the CoM element 106 to directly access a respective CiM of the first-fourth CiM elements 102a-102d:
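A hedged sketch of Pseudo-code IV follows; the exact operand list is assumed for illustration:

    a = read (CiM, location)             # direct access to a CiM location, bypassing the CnM level
    b = read (CiM, location)             # second direct access to a CiM location
    c = read (CiM, location, operation)  # direct access with an operation performed on the accessed data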
In Pseudo-code IV, the first and the second instructions (first and second lines, respectively) reflect that the CoM element 106 is directly accessing a location of a respective CiM of the CiM elements 102a-102d, with the third instruction (e.g., the third line) having an additional step of an operation (e.g., compute) being performed on the accessed data. In Pseudo-code IV, CnM instructions are completely bypassed, and the CoM element 106 interacts with the respective CiM element as if the respective CiM element is a memory, or a memory instance that supports a very basic set of operations. Thus, the first-fourth CiM elements 102a-102d may operate as memories, in addition to executing compute. In some examples, the first-fourth CiM elements 102a-102d have outputs that directly connect to inputs of the first and second CnM elements 104a, 104b.
The first CnM element 104a is connected to a first multiplexer 116 that may selectively provide an output (e.g., output signal) of the first CnM element 104a to the CoM element 106, the first CiM element 102a and the second CiM element 102b. Thus, the first multiplexer 116 provides an output signal of the first CnM element 104a to one of the first and second CiM elements 102a, 102b. A first multi-connection switch 108 is provided to route the output of the first CnM element 104a to the first CiM element 102a and/or the second CiM element 102b.
The second CnM element 104b is connected to a second multiplexer 118 that may selectively provide an output of the second CnM element 104b to the CoM element 106, the third CiM element 102c and the fourth CiM element 102d. A second multi-connection switch 110 is provided to route the output of the second CnM element 104b to the third CiM element 102c or the fourth CiM element 102d.
A third multi-connection switch 112 is provided and selectively provides an input, which may originate from outside the compute and memory architecture 100, to the first and second multi-connection switches 108, 110. The third multi-connection switch 112 may also receive an output of the CoM element 106 via the third multiplexer 114. The third multi-connection switch 112 may also route the output received from the CoM element 106 to the first and second multi-connection switches 108, 110.
The CoM element 106 is connected with a third multiplexer 114. The third multiplexer 114 may selectively route an output signal of the CoM element 106 to the third multi-connection switch 112 and/or to an output. The third multi-connection switch 112 may provide the output signal of the CoM element 106 to the first multi-connection switch 108 or the second multi-connection switch 110, and the output signal may then be provided to one or more of the first-fourth CiM elements 102a-102d. Thus, the third multiplexer 114 provides the output signal of the CoM element 106 to one or more of the first-fourth CiM elements 102a-102d.
The compute and memory architecture 100 (e.g., a three-level hierarchical architecture) as illustrated herein forms an inherent three-level nested loop to execute numerous different AI algorithms or applications, as sketched below. Further, for hardware parallelism, the first-fourth CiM elements 102a-102d, the first and second CnM elements 104a, 104b and the CoM element 106 may also be instantiated in a tiled manner. While the number of illustrated levels is three, the compute and memory architecture 100 may be designed to have any arbitrary number of levels.
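As a purely illustrative sketch of how the three-level nested loop might map a tiled workload onto the hierarchy (the tile granularity, loop ordering and helper names such as cim_mac, cnm_combine and com_compute are assumptions rather than a prescribed mapping):

    for com_tile in workload:                              # outer loop: CoM element schedules tiles and complex functions
        cnm_outputs = []
        for cnm_tile in com_tile:                          # middle loop: CnM element combines results across CiM elements
            cim_outputs = []
            for cim_tile in cnm_tile:                      # inner loop: CiM element executes MAC on locally stored weights
                cim_outputs.append(cim_mac(cim_tile))
            cnm_outputs.append(cnm_combine(cim_outputs))   # e.g., accumulation, bias addition, ReLU
        com_result = com_compute(cnm_outputs)              # e.g., exponents, square roots, other complex functions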
The CiM layer 102 (e.g., in-memory compute cores) may closely integrate the processing and storage capabilities of a computer system into a single, memory-centric computing structure. In the CiM layer 102, computations may be performed directly in memory rather than moving data between the memory and a computation unit or processor. The first-fourth CiM elements 102a-102d may accelerate machine learning workloads such as AI and/or deep neural network (DNN) workloads. The mapping of workloads onto hardware plays a role in defining the performance and energy consumption in such applications. The CiM elements 102a-102d may also be referred to as in-memory compute cores (IMCCs). Notably, in the compute and memory architecture 100, the CiM layer 102 may perform first computations, the CnM layer 104 may perform second computations and the CoM element 106 may perform third computations. The first, second and third computations may be distinct from one another, although there may be some overlap between computations executed with the CiM layer 102, the CnM layer 104 and the CoM element 106. Thus, the near proximity and hierarchical arrangement of the CiM layer 102, the CnM layer 104 and the CoM element 106 reduces overhead, latency and bandwidth, since the first, second and third computations (which may be for AI and/or DNN workloads) may be executed in close proximity to each other.
It is worthwhile to note that the various components may be implemented in hardware circuitry and/or configurations. For example, the CiM layer 102, the CnM layer 104 and the CoM element 106 may be implemented in hardware implementations that may include configurable logic, fixed-functionality logic, or any combination thereof. Examples of configurable logic include suitably configured programmable logic arrays (PLAs), field programmable gate arrays (FPGAs), complex programmable logic devices (CPLDs), and general purpose microprocessors. Examples of fixed-functionality logic include suitably configured application specific integrated circuits (ASICs), general purpose microprocessor or combinational logic circuits, and sequential logic circuits or any combination thereof. The configurable or fixed-functionality logic can be implemented with complementary metal oxide semiconductor (CMOS) logic circuits, transistor-transistor logic (TTL) logic circuits, or other circuits.
For example, computer program code to carry out operations shown in the method 150 may be written in any combination of one or more programming languages, including an object-oriented programming language such as JAVA, SMALLTALK, C++ or the like and conventional procedural programming languages, such as the “C” programming language or similar programming languages. Additionally, logic instructions might include assembler instructions, instruction set architecture (ISA) instructions, machine instructions, machine dependent instructions, microcode, state-setting data, configuration data for integrated circuitry, state information that personalizes electronic circuitry and/or other structural components that are native to hardware (e.g., host processor, central processing unit/CPU, microcontroller, etc.).
Illustrated processing block 152 executes, with a compute-in-memory (CiM) element, first computations based on first data associated with a workload, and a storage of the first data. Illustrated processing block 154 executes, with a compute-near memory (CnM) element, second computations based on second data associated with the workload. Illustrated processing block 156 executes, with a compute-outside-of-memory (CoM) element, third computations based on third data associated with the workload. Illustrated processing block 158 receives, with a multiplexer, processed data from a first element of the CiM element, the CnM element and the CoM element. Illustrated processing block 160 provides, with the multiplexer, the processed data to a second element of the CiM element, the CnM element and the CoM element. The first computations, the second computations and the third computations are different from each other.
In some examples, the CiM element includes first and second CiM elements that have outputs directly connected to inputs of the CnM element. In some examples, the multiplexer includes first and second multiplexers, and the method 150 further comprises providing, with the first multiplexer, an output signal of the CnM element to the CiM element, and providing, with the second multiplexer, an output signal of the CoM element to the CiM element. In some examples, the method 150 includes storing, with the CiM element, the second data, and fetching, with the CnM element, the second data from the CiM element. In some examples, the workload is associated with one or more of an artificial intelligence model or a machine learning model.
A second hierarchy 514 is illustrated. In the second hierarchy 514, one large CiM set 516 includes CiMs. The CiMs of the CiM set 516 are connected with a CoM 518, similarly to the connections shown with respect to the compute and memory architecture 100.
A third hierarchy 520 is illustrated. In the third hierarchy 520, a set 522 includes four CiMs and one CnM that is connected with the four CiMs, similarly to the connections shown with respect to the compute and memory architecture 100.
A majority of computations may occur at the top of the enhanced compute hierarchy 534. The top of the enhanced compute hierarchy 534 corresponds to the CiM core, which may be the equivalent of a CPU accessing the cache in existing CPU architectures as shown at CPU memory hierarchy 532.
There may be a minimal addition to the instruction set (e.g., for a CPU based implementation) to support examples of the hierarchy. Examples may not have to support a highly vectorized CPU core, which would not only increase the size of the CoM core but also increase the memory bandwidth needed to feed the vector core.
The below list defines different supports for computation with the CoM, CiM and CnM:
The memory storage architecture 386 shows the example system with the processor 388 (e.g., including 16 KB L1 data and instruction caches, a 128 KB CiM and CnM enabled shared L2 cache, and a bandwidth limited connection to main memory (32 Gb/s)). The speedups of the examples herein, such as the processor 388, relative to a CPU baseline (e.g., RISC-V) for executing operations in various number formats (e.g., INT8, INT16, INT32, FP32, etc.) are significant.
The illustrated computing system 600 also includes an input output (IO) module 620 implemented together with the host processor 608, the graphics processor 606 (e.g., GPU), ROM 622, and AI accelerator 602 on a semiconductor die 604 as a system on chip (SoC). The illustrated IO module 620 communicates with, for example, a display 616 (e.g., touch screen, liquid crystal display/LCD, light emitting diode/LED display), a network controller 628 (e.g., wired and/or wireless), FPGA 624 and mass storage 626 (e.g., hard disk drive/HDD, optical disk, solid state drive/SSD, flash memory). The IO module 620 also communicates with sensors 618 (e.g., video sensors, audio sensors, proximity sensors, heat sensors, etc.).
The SoC 604 may further include processors (not shown) and/or the AI accelerator 602 dedicated to artificial intelligence (AI) and/or neural network (NN) processing. For example, the SoC 604 may include vision processing units (VPUs) and/or other AI/NN-specific processors such as the AI accelerator 602, etc. In some embodiments, any aspect of the embodiments described herein may be implemented in the processors, such as the graphics processor 606 and/or the host processor 608, and in the accelerators dedicated to AI and/or NN processing such as the AI accelerator 602 or other devices such as the FPGA 624. In this particular example, the AI accelerator 602 may include CiMs 602a, CnMs 602b and CoMs 602c that are connected in a hierarchical fashion as described herein to increase throughput, decrease latency and reduce bandwidth.
The graphics processor 606, AI accelerator 602 and/or the host processor 608 may execute instructions 614 retrieved from the system memory 612 (e.g., a dynamic random-access memory) and/or the mass storage 626 to implement aspects as described herein. In some examples, when the instructions 614 are executed, the computing system 600 may implement one or more aspects of the examples described herein, such as the compute and memory architecture 100.
The processor core 200 is shown including execution logic 250 having a set of execution units 255-1 through 255-N. Some embodiments may include several execution units dedicated to specific functions or sets of functions. Other embodiments may include only one execution unit or one execution unit that can perform a particular function. The illustrated execution logic 250 performs the operations specified by code instructions.
After completion of execution of the operations specified by the code instructions, back end logic 260 retires the instructions of the code 213. In one embodiment, the processor core 200 allows out of order execution but requires in order retirement of instructions. Retirement logic 265 may take a variety of forms as known to those of skill in the art (e.g., re-order buffers or the like). In this manner, the processor core 200 is transformed during execution of the code 213, at least in terms of the output generated by the decoder, the hardware registers and tables utilized by the register renaming logic 225, and any registers (not shown) modified by the execution logic 250.
The system 1000 is illustrated as a point-to-point interconnect system, wherein the first processing element 1070 and the second processing element 1080 are coupled via a point-to-point interconnect 1050.
Each processing element 1070, 1080 may include at least one shared cache 1896a, 1896b. The shared cache 1896a, 1896b may store data (e.g., instructions) that are utilized by one or more components of the processor, such as the cores 1074a, 1074b and 1084a, 1084b, respectively. For example, the shared cache 1896a, 1896b may locally cache data stored in a memory 1032, 1034 for faster access by components of the processor. In one or more embodiments, the shared cache 1896a, 1896b may include one or more mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of cache, a last level cache (LLC), and/or combinations thereof.
While shown with only two processing elements 1070, 1080, it is to be understood that the scope of the embodiments is not so limited. In other embodiments, one or more additional processing elements may be present in a given processor. Alternatively, one or more of processing elements 1070, 1080 may be an element other than a processor, such as an accelerator or a field programmable gate array. For example, additional processing element(s) may include additional processor(s) that are the same as the first processing element 1070, additional processor(s) that are heterogeneous or asymmetric to the first processing element 1070, accelerators (such as, e.g., graphics accelerators or digital signal processing (DSP) units), field programmable gate arrays, or any other processing element. There can be a variety of differences between the processing elements 1070, 1080 in terms of a spectrum of metrics of merit including architectural, microarchitectural, thermal, power consumption characteristics, and the like. These differences may effectively manifest themselves as asymmetry and heterogeneity amongst the processing elements 1070, 1080. For at least one embodiment, the various processing elements 1070, 1080 may reside in the same die package.
The first processing element 1070 may further include memory controller logic (MC) 1072 and point-to-point (P-P) interfaces 1076 and 1078. Similarly, the second processing element 1080 may include a MC 1082 and P-P interfaces 1086 and 1088.
The first processing element 1070 and the second processing element 1080 may be coupled to an I/O subsystem 1090 via P-P interconnects 1076, 1086, respectively.
In turn, I/O subsystem 1090 may be coupled to a first bus 1016 via an interface 1096. In one embodiment, the first bus 1016 may be a Peripheral Component Interconnect (PCI) bus, or a bus such as a PCI Express bus or another third generation I/O interconnect bus, although the scope of the embodiments is not so limited.
Note that other embodiments are contemplated. For example, instead of the illustrated point-to-point architecture, a system may implement a different communication topology.
Example 1 includes a computing system comprising a compute-in-memory (CiM) element to execute first computations based on first data associated with a workload, and store the first data, a compute-near memory (CnM) element to execute second computations based on second data associated with the workload, a compute-outside-of-memory (CoM) element that executes third computations based on third data associated with the workload, and a multiplexer to receive processed data from a first element of the CiM element, the CnM element and the CoM element, and provide the processed data to a second element of the CiM element, the CnM element and the CoM element.
Example 2 includes the computing system of Example 1, where the first computations, the second computations and the third computations are different from each other.
Example 3 includes the computing system of Example 1, where the CiM element includes first and second CiM elements that have outputs directly connected to inputs of the CnM element.
Example 4 includes the computing system of Example 1, where the multiplexer provides an output signal of the CnM element to the CiM element.
Example 5 includes the computing system of Example 1, where the multiplexer provides an output signal of the CoM element to the CiM element.
Example 6 includes the computing system of Example 1, where the CiM element stores the second data, and the CnM element fetches the second data from the CiM element.
Example 7 includes the computing system of Example 1, where the CiM element, the CnM element and CoM element each include logic coupled to one or more substrates, where the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, and where the workload is associated with one or more of an artificial intelligence model or a machine learning model.
Example 8 includes a semiconductor apparatus comprising one or more substrates, and logic coupled to the one or more substrates, where the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, the logic coupled to the one or more substrates to execute, with a compute-in-memory (CiM) element, first computations based on first data associated with a workload, and a storage of the first data, execute, with a compute-near memory (CnM) element, second computations based on second data associated with the workload, execute, with a compute-outside-of-memory (CoM) element, third computations based on third data associated with the workload, receive, with a multiplexer, processed data from a first element of the CiM element, the CnM element and the CoM element, and provide, with the multiplexer, the processed data to a second element of the CiM element, the CnM element and the CoM element.
Example 9 includes the apparatus of Example 8, where the first computations, the second computations and the third computations are different from each other.
Example 10 includes the apparatus of Example 8, where the CiM element includes first and second CiM elements that have outputs directly connected to inputs of the CnM element.
Example 11 includes the apparatus of Example 8, where the logic coupled to the one or more substrates is to provide, with the multiplexer, an output signal of the CnM element to the CiM element.
Example 12 includes the apparatus of Example 8, where the logic coupled to the one or more substrates is to provide, with the multiplexer, an output signal of the CoM element to the CiM element.
Example 13 includes the apparatus of Example 8, where the logic coupled to the one or more substrates is to store, with the CiM element, the second data, and fetch, with the CnM element, the second data from the CiM element.
Example 14 includes the apparatus of Example 8, where the workload is associated with one or more of an artificial intelligence model or a machine learning model.
Example 15 includes the apparatus of Example 8, where the logic coupled to the one or more substrates includes transistor channel regions that are positioned within the one or more substrates.
Example 16 includes a method comprising executing, with a compute-in-memory (CiM) element, first computations based on first data associated with a workload, and a storage of the first data, executing, with a compute-near memory (CnM) element, second computations based on second data associated with the workload, executing, with a compute-outside-of-memory (CoM) element, third computations based on third data associated with the workload, receiving, with a multiplexer, processed data from a first element of the CiM element, the CnM element and the CoM element, and providing, with the multiplexer, the processed data to a second element of the CiM element, the CnM element and the CoM element.
Example 17 includes the method of Example 16, where the first computations, the second computations and the third computations are different from each other.
Example 18 includes the method of Example 16, where the CiM element includes first and second CiM elements that have outputs directly connected to inputs of the CnM element.
Example 19 includes the method of Example 16, where the multiplexer includes first and second multiplexers, and the method further comprises providing, with the first multiplexer, an output signal of the CnM element to the CiM element, and providing, with the second multiplexer, an output signal of the CoM element to the CiM element.
Example 20 includes the method of Example 16, further comprising storing, with the CiM element, the second data, and fetching, with the CnM element, the second data from the CiM element, where the CiM element, the CnM element and CoM element each include logic coupled to one or more substrates, where the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, and where the workload is associated with one or more of an artificial intelligence model or a machine learning model.
Example 21 includes an apparatus comprising means for executing, with a compute-in-memory (CiM) element, first computations based on first data associated with a workload, and a storage of the first data, means for executing, with a compute-near memory (CnM) element, second computations based on second data associated with the workload, means for executing, with a compute-outside-of-memory (CoM) element, third computations based on third data associated with the workload, means for receiving, with a multiplexer, processed data from a first element of the CiM element, the CnM element and the CoM element, and means for providing, with the multiplexer, the processed data to a second element of the CiM element, the CnM element and the CoM element.
Example 22 includes the apparatus of Example 21, where the first computations, the second computations and the third computations are different from each other.
Example 23 includes the apparatus of Example 21, where the CiM element includes first and second CiM elements that have outputs directly connected to inputs of the CnM element.
Example 24 includes the apparatus of Example 21, where the multiplexer includes first and second multiplexers, and the apparatus further comprises means for providing, with the first multiplexer, an output signal of the CnM element to the CiM element, and means for providing, with the second multiplexer, an output signal of the CoM element to the CiM element.
Example 25 includes the apparatus of Example 21, further comprising means for storing, with the CiM element, the second data, and means for fetching, with the CnM element, the second data from the CiM element, where the CiM element, the CnM element and CoM element each include logic coupled to one or more substrates, where the logic is implemented at least partly in one or more of configurable logic or fixed-functionality hardware logic, and where the workload is associated with one or more of an artificial intelligence model or a machine learning model.
Embodiments are applicable for use with all types of semiconductor integrated circuit (“IC”) chips. Examples of these IC chips include but are not limited to processors, controllers, chipset components, programmable logic arrays (PLAs), memory chips, network chips, systems on chip (SoCs), SSD/NAND controller ASICs, and the like. In addition, in some of the drawings, signal conductor lines are represented with lines. Some may be different, to indicate more constituent signal paths, have a number label, to indicate a number of constituent signal paths, and/or have arrows at one or more ends, to indicate primary information flow direction. This, however, should not be construed in a limiting manner. Rather, such added detail may be used in connection with one or more exemplary embodiments to facilitate easier understanding of a circuit. Any represented signal lines, whether or not having additional information, may actually comprise one or more signals that may travel in multiple directions and may be implemented with any suitable type of signal scheme, e.g., digital or analog lines implemented with differential pairs, optical fiber lines, and/or single-ended lines.
Example sizes/models/values/ranges may have been given, although embodiments are not limited to the same. As manufacturing techniques (e.g., photolithography) mature over time, it is expected that devices of smaller size could be manufactured. In addition, well known power/ground connections to IC chips and other components may or may not be shown within the figures, for simplicity of illustration and discussion, and so as not to obscure certain aspects of the embodiments. Further, arrangements may be shown in block diagram form in order to avoid obscuring embodiments, and also in view of the fact that specifics with respect to implementation of such block diagram arrangements are highly dependent upon the platform within which the embodiment is to be implemented, i.e., such specifics should be well within purview of one skilled in the art. Where specific details (e.g., circuits) are set forth in order to describe example embodiments, it should be apparent to one skilled in the art that embodiments can be practiced without, or with variation of, these specific details. The description is thus to be regarded as illustrative instead of limiting.
The term “coupled” may be used herein to refer to any type of relationship, direct or indirect, between the components in question, and may apply to electrical, mechanical, fluid, optical, electromagnetic, electromechanical, or other connections. In addition, the terms “first”, “second”, etc. may be used herein only to facilitate discussion, and carry no particular temporal or chronological significance unless otherwise indicated.
As used in this application and in the claims, a list of items joined by the term “one or more of” may mean any combination of the listed terms. For example, the phrases “one or more of A, B or C” may mean A, B, C; A and B; A and C; B and C; or A, B and C.
Those skilled in the art will appreciate from the foregoing description that the broad techniques of the embodiments can be implemented in a variety of forms. Therefore, while the embodiments have been described in connection with particular examples thereof, the true scope of the embodiments should not be so limited since other modifications will become apparent to the skilled practitioner upon a study of the drawings, specification, and following claims.