Three dimensional circuit implementing machine trained network

Information

  • Patent Grant
  • Patent Number
    11,790,219
  • Date Filed
    Wednesday, October 13, 2021
  • Date Issued
    Tuesday, October 17, 2023
  • Inventors
  • Original Assignees
    • Adeia Semiconductor Inc. (San Jose, CA, US)
  • Examiners
    • Diallo; Mamadou L
  • Agents
    • Knobbe, Martens, Olson & Bear, LLP
Abstract
Some embodiments provide a three-dimensional (3D) circuit structure that has two or more vertically stacked bonded layers with a machine-trained network on at least one bonded layer. Each bonded layer can be an IC die or an IC wafer in some embodiments with different embodiments encompassing different combinations of wafers and dies for the different bonded layers. The machine-trained network in some embodiments includes several stages of machine-trained processing nodes with routing fabric that supplies the outputs of earlier stage nodes to drive the inputs of later stage nodes. In some embodiments, the machine-trained network is a neural network and the processing nodes are neurons of the neural network. In some embodiments, one or more parameters associated with each processing node (e.g., each neuron) is defined through machine-trained processes that define the values of these parameters in order to allow the machine-trained network (e.g., neural network) to perform particular operations (e.g., face recognition, voice recognition, etc.). For example, in some embodiments, the machine-trained parameters are weight values that are used to aggregate (e.g., to sum) several output values of several earlier stage processing nodes to produce an input value for a later stage processing node.
Description
BACKGROUND

In recent years, there have been great advances in the field of machine learning. Many of these advances have been in machine-trained networks (e.g., deep neural networks) and in algorithms for training such networks. However, there have not been comparable advances in circuits for implementing machine-trained networks. This has been primarily due to an over-reliance on implementing machine-trained networks in datacenters, as opposed to in devices in the real world. Therefore, there is a need in the art for innovative circuits for implementing machine-trained networks as well as other types of designs.


BRIEF SUMMARY

Some embodiments of the invention provide a three-dimensional (3D) circuit structure that uses latches to transfer signals between two bonded circuit layers. In some embodiments, this structure includes a first circuit partition on a first bonded layer and a second circuit partition on a second bonded layer. It also includes at least one latch to transfer signals between the first circuit partition on the first bonded layer and the second circuit partition on the second bonded layer. In some embodiments, the latch operates in (1) an open first mode that allows a signal to pass from the first circuit partition to the second circuit partition and (2) a closed second mode that maintains the signal passed through during the prior open first mode.


Unlike a flip-flop that releases in one clock cycle a signal that it stores in a prior clock cycle, a transparent latch does not introduce such a setup time delay in the design. In fact, by allowing the signal to pass from the first circuit partition to the second circuit partition during its open mode, the latch allows the signal to borrow time from a first portion of a clock cycle of the second circuit partition for a second portion of the clock cycle of the second circuit partition. This borrowing of time is referred to below as time borrowing. Also, this time borrowing allows the signal to be available at the destination node in the second circuit partition early, so that the second circuit can act on it in the clock cycle in which this signal is needed. Compared to flip-flops, latches also reduce the clock load because, while flip-flops require at least two different clock transitions to store and then release a value, transparent latches only require one signal transition to latch a value that they previously passed through.


In some embodiments, the 3D circuit has several such latches at several boundary nodes between different circuit partitions on different bonded layers. Each latch in some embodiments iteratively operates in two sequential modes, an open first mode to let a signal pass from one circuit partition (e.g., a first partition or a second partition) to the other circuit partition (e.g., the second partition or the first partition), and a closed second mode to hold the signal passed during the prior open first mode.


Each latch in some embodiments is associated with one pair of boundary nodes, with one node in the first bonded layer and another node in the second bonded layer. Each pair of nodes is electrically interconnected through a conductive interface, such as a through-silicon via (TSV) or a direct bond interface (DBI) connection (also called hybrid bonding). Each latch in some embodiments is defined on just one of the two bonded layers. In some embodiments, each latch on one bonded layer has its output carried to the other bonded layer by interconnect (e.g., wires) and the conductive interface (e.g., TSV or DBI connection) that connects the latch's associated pair of nodes. In other embodiments, each latch on one bonded layer has its input supplied from the other bonded layer by interconnect and the conductive interface that connects the latch's associated pair of nodes. In still other embodiments, a conductive-interface connection can have two latches on the two bonded layers that it connects, and either latch can be used to facilitate time borrowing as a signal travels between the two circuit partitions on the two bonded layers.


The first and second bonded layers are different in different embodiments. In some embodiments, both bonded layers are integrated circuit (IC) dies. In other embodiments, both bonded layers are IC wafers. In still other embodiments, one of these bonded layers is an IC die, while the other bonded layer is an IC wafer. The first and second bonded layers are vertically stacked on top of each other with no other intervening bonded layers in some embodiments, while these two bonded layers have one or more intervening bonded layers between them in other embodiments.


In some embodiments, one bonded layer fully overlaps the other bonded layer (e.g., the two bonded layers have the same size and are aligned such that they overlap each other's bounding shape), or one bonded layer is smaller than the other bonded layer and is completely subsumed by the footprint of the other bonded layer (i.e., has its bounding shape completely overlapped by the bounding shape of the other bonded layer). In other embodiments, the two bonded layers partially overlap. Also, in some embodiments, the first and second circuit partitions on the first and second bonded layers fully overlap (e.g., the two partitions have the same size and are aligned such that they overlap each other's bounding shape), or one partition is smaller than the other partition and is completely subsumed by the footprint of the other partition. In other embodiments, the two circuit partitions partially overlap.


Some embodiments provide a three-dimensional (3D) circuit structure that has two or more vertically stacked bonded layers with a machine-trained network on at least one bonded layer. As described above, each bonded layer can be an IC die or an IC wafer in some embodiments with different embodiments encompassing different combinations of wafers and dies for the different bonded layers. The machine-trained network in some embodiments includes several stages of machine-trained processing nodes with routing fabric that supplies the outputs of earlier stage nodes to drive the inputs of later stage nodes. In some embodiments, the machine-trained network is a neural network and the processing nodes are neurons of the neural network.


In some embodiments, one or more parameters associated with each processing node (e.g., each neuron) is defined through machine-trained processes that define the values of these parameters in order to allow the machine-trained network (e.g., neural network) to perform particular operations (e.g., face recognition, voice recognition, etc.). For example, in some embodiments, the machine-trained parameters are weight values that are used to aggregate (e.g., to sum) several output values of several earlier stage processing nodes to produce an input value for a later stage processing node.


In some embodiments, the machine-trained network includes a first sub-network on one bonded layer and a second sub-network on another bonded layer, with these two sub-networks partially or fully overlapping. Alternatively, or conjunctively, the machine-trained network or sub-network on one bonded layer partially or fully overlaps a memory (e.g., formed by one or more memory arrays) on another bonded layer in some embodiments. This memory in some embodiments is a memory that stores machine-trained parameters for configuring the processing nodes of the machine-trained network or sub-network to perform a particular operation. In other embodiments, this memory is a memory that stores the outputs of the processing nodes (e.g., outputs of earlier stage processing node for later stage processing node).


While being vertically aligned with one memory, the machine-trained network's processing nodes in some embodiments are on the same bonded layer with another memory. For instance, in some embodiments, a first bonded layer in a 3D circuit includes the processing nodes of a machine-trained network and a first memory to store machine-trained parameters for configuring the processing nodes, while a second bonded layer in the 3D circuit includes a second memory to store values produced by the processing nodes. In other embodiments, the first bonded layer in the 3D circuit includes the processing nodes of a machine-trained network and a first memory to store values produced by the processing nodes, while the second bonded layer in the 3D circuit includes a second memory to store machine-trained parameters for configuring the processing nodes.


In still other embodiments, the first bonded layer in the 3D circuit includes the processing nodes of a machine-trained network, while the second bonded layer in the 3D circuit includes a first memory to store values produced by the processing nodes and a second memory to store machine-trained parameters for configuring the processing nodes. In yet other embodiments, the processing nodes on one bonded layer partially or fully overlap two memories on two different layers, with one memory storing machine-trained parameters and the other memory storing processing node output values. The 3D circuit of other embodiments has processing nodes on two or more bonded layers with parameter and/or output memories on the same or different bonded layers. In this document, parameter memory is a memory that stores machine-trained parameters for configuring the machine-trained network (e.g., for configuring the processing nodes of the network) to perform one or more tasks, while output memory is a memory that stores the outputs of the processing nodes of the machine-trained network.


Again, in the above-described embodiments, the bonded layers (two or more) that contain a machine-trained network's processing nodes and memories do not have any intervening bonded layer in some embodiments, while they have one or more intervening bonded layers between or among them in other embodiments. Also, in some embodiments, the machine-trained network's processing nodes and memories on different bonded layers are connected to each other through conductive interfaces, such as TSV or DBI connections.


In some embodiments, the IC die on which a neural network is defined is an ASIC (Application Specific IC) and each neuron in this network is a computational unit that is custom-defined to operate as a neuron. Some embodiments implement a neural network by re-purposing (i.e., reconfiguring) one or more neurons used for earlier neural network stages to implement one or more neurons in later neural network stages. This allows fewer custom-defined neurons to be used to implement the neural network. In such embodiments, the routing fabric between the neurons is at least partially defined by one or more output memories that are used to store the outputs of earlier stage neurons to feed the inputs of later stage neurons.


In some embodiments, the output and parameter memories of the neural network have different memory structures (i.e., are different types of memories). For instance, in some embodiments, the output memory has a different type of output interface (e.g., one that allows for random access of the output memory's storage locations) than the parameter memory (e.g., the parameter memory's output interface only provides sequential access of its storage locations). Alternatively, or conjunctively, the parameter memory of the neural network is a read-only memory (ROM), while the output memory of the neural network is a read-write memory in some embodiments. The parameter memory in some embodiments is a sequential ROM that sequentially reads out locations in the ROM to output the parameters that configure the neural network to perform certain machine-trained task(s).


The output memory in some embodiments is a dynamic random access memory (DRAM). In other embodiments, the output memory is an ephemeral RAM (ERAM) that has one or more arrays of storage cells (e.g., capacitive cells) and pass transistors like traditional DRAMs, but, unlike traditional DRAMs, does not use read-independent refresh cycles to charge the storage cells. This is because the values in the ERAM memory are written and read at such rates that these values do not need to be refreshed with separate refresh cycles. In other words, because intermediate output values of the neural network only need to be used as input into the next layer (or few layers) of the neural network, they are temporary in nature. Thus, the output memory can be implemented with a memory architecture that is compact like a DRAM memory architecture without the need for read-independent refresh cycles.


Some embodiments of the invention provide an integrated circuit (IC) with a defect-tolerant neural network. The neural network has one or more redundant neurons in some embodiments. After the IC is manufactured, a defective neuron in the neural network can be detected through a test procedure and then replaced by a redundant neuron (i.e., the redundant neuron can be assigned the operation of the defective neuron). The routing fabric of the neural network can be reconfigured so that it re-routes signals around the discarded, defective neuron. In some embodiments, the reconfigured routing fabric does not provide any signal to or forward any signal from the discarded, defective neuron, and instead provides signals to and forwards signals from the redundant neuron that takes the defective neuron's position in the neural network.


In the embodiments that implement a neural network by re-purposing (i.e., reconfiguring) one or more individual neurons to implement neurons of multiple stages of the neural network, the IC discards a defective neuron by removing it from the pool of neurons that it configures to perform the operation(s) of neurons in one or more stages of neurons, and assigning this defective neuron's configuration(s) (i.e., its machine-trained parameter set(s)) to a redundant neuron. In some of these embodiments, the IC would re-route around the defective neuron and route to the redundant neuron, by (1) supplying machine-trained parameters and input signals (e.g., previous stage neuron outputs) to the redundant neuron instead of supplying these parameters and signals to the defective neuron, and (2) storing the output(s) of the redundant neuron instead of storing the output(s) of the defective neuron.


One of ordinary skill will understand that while several embodiments of the invention have been described above by reference to machine-trained neural networks with neurons, other embodiments of the invention are implemented on other machine-trained networks with other kinds of machine-trained processing nodes.


The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description, the Drawings, and the Claims is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description, and the Drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

The novel features of the invention are set forth in the appended claims. However, for the purpose of explanation, several embodiments of the invention are set forth in the following figures.



FIG. 1 illustrates an example of a three-dimensional (3D) circuit structure that has several latches at several boundary nodes between the two bonded layers.



FIG. 2 illustrates how the latch of FIG. 1 allows the signal traversing the two dies to time borrow.



FIG. 3 illustrates another example of a 3D circuit structure with a latch being placed on the IC die layer on which a signal terminates.



FIG. 4 illustrates how the latch of FIG. 3 allows the signal traversing the two dies to time borrow.



FIG. 5 illustrates an example of a transparent latch.



FIG. 6 illustrates a 3D circuit structure that has two or more vertically stacked bonded layers with a neural network on at least one bonded layer.



FIG. 7 illustrates an example of a neural network that includes a first sub-network on one bonded layer and a second sub-network on another bonded layer.



FIG. 8 illustrates an example of a neural network that has its neurons aligned with one memory while being on the same bonded layer with another memory.



FIGS. 9 and 10 illustrate different examples of a 3D IC with different components of a neural network on different IC dies.



FIG. 11 illustrates an example of a 3D IC with the neurons on one bonded layer partially or fully overlapping two memories on two different layers.



FIG. 12 illustrates a device that uses a 3D IC of some embodiments.



FIGS. 13 and 14 illustrate examples of the implementation of a neural network by re-purposing (i.e., reconfiguring) one or more individual neurons to implement neurons of multiple stages of the neural network.



FIG. 15 conceptually illustrates a defect-curing process.





DETAILED DESCRIPTION

In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.


Some embodiments of the invention provide a three-dimensional (3D) circuit structure that uses latches to transfer signals between two bonded circuit layers. In some embodiments, this structure includes a first circuit partition on a first bonded layer and a second circuit partition on a second bonded layer. It also includes at least one latch to transfer signals between the first circuit partition on the first bonded layer and the second circuit partition on the second bonded layer. In some embodiments, the latch operates in (1) an open first mode (also called a transparent mode) that allows a signal to pass from the first circuit partition to the second circuit partition and (2) a closed second mode that maintains the signal passed through during the prior open first mode.


Unlike a flip-flop that releases in one clock cycle a signal that it stores in a prior clock cycle, a transparent latch does not introduce such a setup time delay in the design. In fact, by allowing the signal to pass from the first circuit partition to the second circuit partition during its open mode, the latch allows the signal to borrow time from a first portion of a clock cycle of the second circuit partition for a second portion of the clock cycle of the second circuit partition. This borrowing of time is referred to below as time borrowing. Also, this time borrowing allows the signal to be available at the destination node in the second circuit partition early, so that the second circuit can act on it in the clock cycle in which this signal is needed. Compared to flip-flops, latches also reduce the clock load because, while flip-flops require at least two different clock transitions to store and then release a value, transparent latches only require one signal transition to latch a value that they previously passed through.


The first and second bonded layers are different in different embodiments. In some embodiments, both bonded layers are integrated circuit (IC) dies. In other embodiments, both bonded layers are IC wafers. In still other embodiments, one of these bonded layers is an IC die, while the other bonded layer is an IC wafer. The first and second bonded layers are vertically stacked on top of each other with no other intervening bonded layers in some embodiments, while these two bonded layers have one or more intervening bonded layers between them in other embodiments.


In some embodiments, the 3D circuit has several such latches at several boundary nodes between different circuit partitions on different bonded layers. Each latch in some embodiments is associated with one pair of boundary nodes, with one node in the first bonded layer and another node in the second bonded layer. Each pair of nodes is electrically interconnected through a conductive interface, such as a through-silicon via (TSV) or a direct bond interface (DBI) connection. Each latch in some embodiments is defined on just one of the two bonded layers.



FIG. 1 illustrates an example of a 3D circuit structure that has several latches at several boundary nodes between the two bonded layers. This structure is a 3D IC 100 that is formed by vertically stacking two IC dies 102 and 104. In this example, the two dies 102 and 104 have the same size and are aligned so that their bounding shapes overlap each other. This does not have to be the case, as in some embodiments, the different dies have different sizes and are vertically aligned differently.


In FIG. 1, the 3D circuit structure 100 has several conductive vertical connections 110 that connect circuits on the two IC dies 102 and 104. Examples of such connections include TSVs and DBI connections. DBI provides area-efficient, dense interconnect between two blocks. In two dimensions, the number of interconnects between two blocks is limited to the perimeter facing each other. Fine pitch 3D interface, on the other hand, is only limited by the area of the block overlap. For example, a 1×1 mm block with 100 nm wire pitch and 2 um DBI pitch can fit 10,000 wires through one side in a 2D format versus 250,000 wires spread across the entire block through DBI in a 3D format. DBI is further described in U.S. Pat. Nos. 6,962,835 and 7,485,968, both of which are incorporated herein by reference.
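

The wire counts quoted above follow from simple pitch arithmetic, which the short sketch below reproduces for the stated 1 mm × 1 mm block, 100 nm wire pitch, and 2 um DBI pitch; it is a back-of-the-envelope illustration only.

```python
# Back-of-the-envelope check of the wire counts quoted above for a
# 1 mm x 1 mm block: a 100 nm wire pitch limits a 2D interface to one
# facing edge, while a 2 um DBI pitch can use the whole overlap area.

block_side_um = 1000.0   # 1 mm expressed in micrometers
wire_pitch_um = 0.1      # 100 nm routing pitch along one block edge
dbi_pitch_um = 2.0       # 2 um DBI connection pitch

# 2D: connections are limited to the wires that fit along one facing edge.
wires_2d = round(block_side_um / wire_pitch_um)

# 3D: connections are limited only by how many DBI pads fit in the overlap area.
wires_3d = round(block_side_um / dbi_pitch_um) ** 2

print(f"2D edge-limited wires: {wires_2d:,}")   # 10,000
print(f"3D area-limited wires: {wires_3d:,}")   # 250,000
```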


For each of several conductive vertical connections between two adjacent dies, one or both of the dies has a latch that electrically connects (through interconnect) to the conductive-interface connection. In some embodiments, each such latch iteratively operates in two sequential modes, an open first mode (also called a transparent mode) to let a signal pass from one circuit partition on one IC die to a circuit partition on the other IC die, and a closed second mode to hold the signal passed during the prior open first mode.



FIG. 1 illustrates one such latch 132. This latch facilitates signal flow from a first node 130 in a first circuit block 120 on the IC die 104 to a second node 138 in a second circuit block 122 on the IC die 102. This signal flow traverses along a conductive vertical connection 110a (e.g., one DBI connection) between the IC dies 102 and 104. As shown, this conductive vertical connection 110a connects two nodes on the two dies, a node 134 on die 104 and a node 136 on die 102. In this example, the latch 132 on the IC die 104 has its output carried to the IC die 102 by interconnect (e.g., wires) and the conductive vertical connection 110a.



FIG. 2 illustrates how the latch 132 allows the signal traversing the two dies 102 and 104 to time borrow. Specifically, it shows the latch 132 operating in an open first phase 202. During this phase, the latch is open and transparent. Thus, it allows a signal to pass from the first circuit partition 120 to location 205 in the second circuit partition 122. FIG. 2 also shows the latch 132 operating in a closed second phase 204. During this phase, the latch has closed. When the latch closes, it maintains the signal that passed through it during the prior open first phase. As shown, the signal reaches the node 138 during the second phase.


Because the latch was open during its first phase, the signal was allowed to pass through from the first circuit block 120 to the second circuit block 122 in this phase, which, in turn, allowed the signal to reach its destination 138 in the second circuit block 122 sooner, in the closed second phase 204 of the latch 132. In this manner, the latch allows the signal to time borrow (e.g., borrow time from the first phase to speed up the operation of the second circuit block during the second phase).
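

The effect of this time borrowing can be illustrated with a rough timing comparison. In the sketch below, the clock period and path delays are arbitrary assumed values; only the comparison between a boundary flip-flop and a transparent boundary latch mirrors the behavior described above.

```python
# Illustrative timing comparison for a signal crossing from die 104 to die 102.
# All delay values here are assumed, not taken from the patent; only the
# mechanism mirrors the text: a flip-flop at the boundary holds the signal
# until the next clock edge, while a transparent latch (assumed to be open
# when the signal reaches the boundary) lets it pass straight through, so the
# logic in the second partition effectively borrows time from the first
# portion of its clock cycle.

clock_period = 2.0          # ns, assumed clock for both partitions
launch_to_boundary = 1.4    # ns, assumed delay from node 130 to the boundary
boundary_to_dest = 1.1      # ns, assumed delay from the boundary to node 138

# Flip-flop at the boundary: the signal is captured and only released into the
# second partition at the next clock edge.
next_edge = (int(launch_to_boundary / clock_period) + 1) * clock_period
ff_arrival = next_edge + boundary_to_dest

# Transparent latch at the boundary: the signal passes through during the open
# phase and keeps propagating toward node 138 without waiting for an edge.
latch_arrival = launch_to_boundary + boundary_to_dest

print(f"arrival with boundary flip-flop: {ff_arrival:.1f} ns")
print(f"arrival with transparent latch:  {latch_arrival:.1f} ns")
print(f"time borrowed:                   {ff_arrival - latch_arrival:.1f} ns")
```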


Instead of placing a latch on the IC die layer from which the signal originates, some embodiments place the latch on the IC die layer on which the signal terminates. FIGS. 3 and 4 illustrate one such example. The example in these figures is similar to the example in FIGS. 1 and 2, except that the latch 132 on the IC die 104 has been replaced with a latch 342 on the IC die 102. This latch is used when a signal traverses from a node 330 on a circuit block 320 on the first die 104 along a vertical connection 110b to a node 338 on a circuit block 322 on the second die 102. The vertical connection 110b connects two nodes 334 and 336 on the two dies 104 and 102.


As shown in FIG. 4, the latch 342 operates in an open first phase 402. During this phase, the signal from a node 330 passes from the first circuit partition 320 to location 405 in the second circuit partition 322. When the latch 342 closes (i.e., operates in the closed second phase 404), the latch maintains the signal that passed through it during the prior open first phase to allow the signal to reach the node 338 during the second phase.


In other embodiments, a conductive vertical connection can be associated with two latches on the two bonded layers that it connects, and either latch can be used to facilitate time borrowing as a signal travels between the two circuit partitions on the two bonded layers through the conductive vertical connection. Thus, for the examples illustrated in FIGS. 1-4, the 3D IC in some embodiments has both the latch 132 on the IC die 104 and the latch 342 on the IC die 102, and either of these latches can be selectively enabled to facilitate time borrowing across the two layers.



FIG. 5 illustrates an example of a transparent latch 500. This latch is a D-latch that is formed by an inverter 525, two AND gates 535a and 535b, and two XOR gates 540a and 540b. The inverter receives the input signal at the latch's D terminal 505 and provides its output to an input of AND gate 535a. The input signal is also fed to one of the inputs of the AND gate 535b. The AND gates 535a and 535b also receive a latch enable signal E at the latch's enable terminal 510. This enable signal can be a signal generated by another user-design circuit or a signal supplied by a clock or by a storage location driven by the clock or a user-design circuit.


The outputs of the AND gates 535a and 535b are supplied respectively to the XOR gates 540a and 540b. These XOR gates are cross-coupled such that their outputs are fed back to the inputs of each other. The outputs of the XOR gates 540a and 540b represent the output of the latch. When only one latch output is needed, the output of the XOR gate 540a presented at the Q terminal 515 of the latch serves as the output of the latch 500. As shown by the truth table 550 in FIG. 5, the latch operates in its open/transparent mode (to pass through a signal) when the enable signal is 1, while it operates in its closed/latch mode (to maintain the signal previously passed) when the enable signal is 0.
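

The latch's behavior can be summarized with a small behavioral model that follows truth table 550; the sketch below is illustrative only and abstracts away the inverter, AND, and XOR gate structure of FIG. 5 as well as all propagation delays.

```python
# Behavioral model of the transparent D-latch of FIG. 5, matching truth
# table 550: with enable E = 1 the latch is open (Q follows D); with E = 0
# it is closed and Q holds the last value that passed through.

class TransparentLatch:
    def __init__(self, initial=0):
        self.q = initial    # state presented at the Q terminal

    def evaluate(self, d, e):
        if e == 1:          # open / transparent mode
            self.q = d
        return self.q       # closed mode: previously passed value is maintained

latch = TransparentLatch()
stimulus = [(1, 1), (0, 1), (0, 0), (1, 0), (1, 1)]   # (D, E) pairs
for d, e in stimulus:
    print(f"D={d} E={e} -> Q={latch.evaluate(d, e)}")
# Expected Q sequence: 1, 0, 0, 0, 1
```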


Some embodiments provide a three-dimensional (3D) circuit structure that has two or more vertically stacked bonded layers with a machine-trained network on at least one bonded layer. For instance, each bonded layer can be an IC die or an IC wafer in some embodiments with different embodiments encompassing different combinations of wafers and dies for the different bonded layers. Also, the machine-trained network includes an arrangement of processing nodes in some embodiments. In several examples described below, the processing nodes are neurons and the machine-trained network is a neural network. However, one of ordinary skill will realize that other embodiments are implemented with other machine-trained networks that have other kinds of machine-trained processing nodes.



FIG. 6 illustrates an example of a 3D circuit structure with a neural network on at least one of its bonded layers. In this example, the 3D circuit structure is a 3D IC 600 that has two vertically stacked dies 602 and 604, with IC die 604 having a neural network 605. In this example, the IC dies 602 and 604 have the same size and are aligned so that their bounding shapes overlap. This does not have to be the case, as in some embodiments, the different dies have different sizes and are vertically aligned differently. As shown in FIG. 6, the IC dies 602 and 604 have several vertical connections, which in some embodiments are DBI connections. In other embodiments, these connections are other types of direct bonding connections or TSV connections.


As further shown, the neural network 605 in some embodiments includes several stages of neurons 610 with routing fabric that supplies the outputs of earlier stage neurons to drive the inputs of later stage neurons. In some embodiments, one or more parameters associated with each neuron is defined through machine-trained processes that define the values of these parameters in order to allow the neural network to perform particular operations (e.g., face recognition, voice recognition, etc.).



FIG. 6 illustrates an example of such machine-trained parameters for some embodiments. These parameters are the weight values W_i that are used to sum several output values y_i of several earlier stage neurons to produce an input value z_i for an activation function 625 of a later stage neuron. In this example, the neural network is a feed-forward neural network that has multiple neurons arranged in multiple layers (multiple stages), with each neuron having a linear component 620 and a non-linear component 625, called an activation function. In other embodiments, the neural network is not a feed-forward network (e.g., it is a recurrent network, etc.).


In all but the last layer of the feed-forward neural network 605, each neuron 610 receives two or more outputs of neurons from earlier neuron layers (earlier neuron stages) and provides its output to one or more neurons in subsequent neuron layers (subsequent neuron stages). The outputs of the neurons in the last layer represent the output of the network 605. In some embodiments, each output dimension of the network 605 is rounded to a quantized value.


The linear component (linear operator) 620 of each interior or output neuron computes a dot product of a vector of weight coefficients and a vector of output values of prior nodes, plus an offset. In other words, an interior or output neuron's linear operator computes a weighted sum of its inputs (which are outputs of the previous stage neurons that the linear operator receives) plus an offset. Similarly, the linear component 620 of each input stage neuron computes a dot product of a vector of weight coefficients and a vector of input values, plus an offset. Each neuron's nonlinear component (nonlinear activation operator) 625 computes a function based on the output of the neuron's linear component 620. This function is commonly referred to as the activation function.


The notation of FIG. 6 can be described as follows. Consider a neural network with L hidden layers (i.e., L layers that are not the input layer or the output layer). Hidden layers are also referred to as intermediate layers. The variable l can be any of the L hidden layers (i.e., l ∈ {1, . . . , L} indexes the hidden layers of the network). The variable z_i^{(l+1)} represents the output of the linear component of an interior neuron i in layer l+1. As indicated by the following Equation (A), the variable z_i^{(l+1)} in some embodiments is computed as the dot product of a vector of weight values W^{(l)} and a vector of outputs y^{(l)} from layer l, plus an offset b_i, typically referred to as a bias.

z_i^{(l+1)} = (W_i^{(l+1)} · y^{(l)}) + b_i^{(l+1)}.  (A)


The symbol · denotes the dot product. The weight coefficients W^{(l)} are weight values that can be adjusted during the network's training in order to configure this network to solve a particular problem. Other embodiments use other formulations than Equation (A) to compute the output z_i^{(l+1)} of the linear operator 620.


The output y^{(l+1)} of the nonlinear component 625 of a neuron in layer l+1 is a function of the neuron's linear component, and can be expressed by Equation (B) below.

y_i^{(l+1)} = ƒ(z_i^{(l+1)}),  (B)


In this equation, ƒ is the nonlinear activation function for node i. Examples of such activation functions include a sigmoid function (ƒ(x) = 1/(1 + e^{-x})), a tanh function, a ReLU (rectified linear unit) function or a leaky ReLU function.


Traditionally, the sigmoid function and the tanh function have been the activation functions of choice. More recently, the ReLU function has been proposed for the activation function in order to make it easier to compute the activation function. See Nair, Vinod and Hinton, Geoffrey E., “Rectified linear units improve restricted Boltzmann machines,” ICML, pp. 807-814, 2010. Even more recently, the leaky ReLU has been proposed in order to simplify the training of the processing nodes by replacing the flat section of the ReLU function with a section that has a slight slope. See He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian, “Delving deep into rectifiers: Surpassing human-level performance on imagenet classification,” arXiv preprint arXiv:1502.01852, 2015. In some embodiments, the activation functions can be other types of functions, like cup functions and periodic functions.
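

A minimal sketch of Equations (A) and (B) for a single neuron is shown below; the weights, bias, and previous-layer outputs are placeholder numbers rather than machine-trained values, and the sigmoid and ReLU functions are included only as examples of the activation functions mentioned above.

```python
import numpy as np

# Sketch of one neuron's computation per Equations (A) and (B): the linear
# component 620 forms a weighted sum of previous-layer outputs plus a bias,
# and the nonlinear component 625 applies an activation function to that sum.
# The weight, bias, and input values below are arbitrary placeholders.

def linear_component(weights, prev_outputs, bias):
    """Equation (A): z = (W . y) + b."""
    return np.dot(weights, prev_outputs) + bias

def sigmoid(z):
    """Example activation: f(x) = 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    """Another example activation: f(x) = max(0, x)."""
    return np.maximum(0.0, z)

y_prev = np.array([0.2, 0.7, 0.1])   # outputs of earlier-stage neurons
w_i = np.array([0.5, -0.3, 0.8])     # weights for neuron i (placeholders)
b_i = 0.05                           # bias (offset) for neuron i

z_i = linear_component(w_i, y_prev, b_i)    # Equation (A)
print("sigmoid output:", sigmoid(z_i))      # Equation (B) with a sigmoid
print("ReLU output:   ", relu(z_i))         # Equation (B) with a ReLU
```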


Before the neural network 605 can be used to solve a particular problem (e.g., to perform face recognition), the network in some embodiments is put through a supervised training process that adjusts (i.e., trains) the network's configurable parameters (e.g., the weight coefficients of its linear components). The training process iteratively selects different input value sets with known output value sets. For each selected input value set, the training process in some embodiments forward propagates the input value set through the network's nodes to produce a computed output value set. For a batch of input value sets with known output value sets, the training process back propagates an error value that expresses the error (e.g., the difference) between the output value sets that the network 605 produces for the input value sets in the training batch and the known output value sets of these input value sets. This process of adjusting the configurable parameters of the machine-trained network 605 is referred to as supervised, machine training (or machine learning) of the neurons of the network 605.
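

The training loop described above can be sketched as follows for a single sigmoid neuron trained with gradient descent; the synthetic data set, learning rate, and squared-error measure are illustrative assumptions, not details taken from any particular embodiment.

```python
import numpy as np

# Minimal sketch of the supervised training loop described above: forward
# propagate a batch of inputs with known outputs, measure the error, and
# back propagate it to adjust the weights.

rng = np.random.default_rng(0)
inputs = rng.normal(size=(8, 3))                  # batch of input value sets
targets = (inputs.sum(axis=1) > 0).astype(float)  # known output value set

weights = rng.normal(scale=0.1, size=3)
bias = 0.0
learning_rate = 0.5

for step in range(200):
    z = inputs @ weights + bias                   # linear component (Equation (A))
    y_hat = 1.0 / (1.0 + np.exp(-z))              # sigmoid activation (Equation (B))
    error = y_hat - targets                       # difference from known outputs
    # Back propagation through the sigmoid and the linear component.
    grad_z = error * y_hat * (1.0 - y_hat)
    weights -= learning_rate * (inputs.T @ grad_z) / len(inputs)
    bias -= learning_rate * grad_z.mean()

print("mean squared error after training:", float(np.mean(error ** 2)))
```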


In some embodiments, the IC die on which the neural network is defined is an ASIC (Application Specific IC) and each neuron in this network is a computational unit that is custom-defined to operate as a neuron. Some embodiments implement a neural network by re-purposing (i.e., reconfiguring) one or more neurons used for earlier neural network stages to implement one or more neurons in later neural network stages. This allows fewer custom-defined neurons to be used to implement the neural network. In such embodiments, the routing fabric between the neurons is at least partially defined by one or more output memories that are used to store the outputs of earlier stage neurons to feed the inputs of later stage neurons.


In some embodiments, the neural network includes a first sub-network on one bonded layer and a second sub-network on another bonded layer, with these two sub-networks partially or fully overlapping. FIG. 7 illustrates an example of such an embodiment. It shows a 3D IC 700 with a neural network that is formed by two sub-networks 705 and 707. As shown, the first sub-network 705 is on a first IC die 702 while the second sub-network 707 is on a second IC die 704. The footprints of these two sub-networks 705 and 707 on the two different IC dies 702 and 704 partially or fully overlap.


As further shown in FIG. 7, the components on the IC's dies 702 and 704 are interconnected by several vertical connections 710, which in some embodiments are DBI connections. In other embodiments, these connections are other types of direct bonding connections or TSV connections. As shown, numerous such connections 710 are used to electrically connect nodes on the two sub-networks 705 and 707 on the dies 702 and 704.


In some embodiments, the sub-network 705 includes the neurons that are used to implement the odd layer neurons in the multi-layer neuron arrangement (e.g., the multi-layer arrangement shown in FIG. 6), while the sub-network 707 includes the neurons that are used to implement the even layer neurons in this arrangement. In other embodiments, each sub-network has multiple layers (stages) of neurons (e.g., two layers of neurons) for implementing multiple adjacent layers of neurons (e.g., sub-network 705 implements even adjacent pairs of neuron layers, while sub-network 707 implements odd adjacent pairs of neuron layers, where even and odd layer pairs sequentially alternate and the first layer pair is the first two neuron layers).


In some embodiments, the vertical connections 710 connect the output of neurons of sub-network 705 on the first IC die to an output memory on the second die that connects to the sub-network 707, so that these values can be stored in the output memory. From this memory, the stored output values are supplied to neurons of the sub-network 707 on the second die so that these neurons can perform computations based on the outputs of the neurons of the sub-network 705 that implement an earlier stage of the neural network's operation.


In some of these embodiments, the outputs of the neurons of the sub-network 707 are then passed through the vertical connections 710 to an output memory on the first die 702 that connects to the sub-network 705. From the output memory on the first die 702, the outputs of the neurons of the sub-network 707 of the second die are supplied to the neurons of the sub-network 705 of the first die once these neurons have been configured to perform the operation of later stage neurons of the neural network. Based on these outputs, the neurons of the sub-network 705 can then perform computations associated with the later stage neurons of the neural network. In this manner, the output values of the neurons of the sub-networks 705 and 707 can continue to pass back and forth between the two IC dies 702 and 704 as the neurons of each sub-network 705 and 707 are reconfigured to perform successive stages, or successive sets (e.g., pairs) of stages, of operation of the neural network.
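

The sketch below illustrates this ping-pong style of execution at a purely functional level, with one pool of reconfigurable neurons standing in for each of the sub-networks 705 and 707; the layer widths, random weights, and ReLU activation are assumptions made only for illustration, and the per-die output memories are represented implicitly by the array of activations handed from one layer to the next.

```python
import numpy as np

# Conceptual sketch of the ping-pong execution described above: sub-network
# 705 (die 702) computes the odd layers and sub-network 707 (die 704) the
# even layers, with each layer's outputs notionally written through the
# vertical connections into the output memory next to the sub-network that
# consumes them.

rng = np.random.default_rng(1)
layer_widths = [4, 5, 5, 5, 3]          # input width followed by four neuron layers
layer_params = [
    (rng.normal(size=(layer_widths[i + 1], layer_widths[i])),   # weights for layer i+1
     rng.normal(size=layer_widths[i + 1]))                      # biases for layer i+1
    for i in range(len(layer_widths) - 1)
]

def run_layer(params, layer_input):
    """Configure one sub-network's neurons with this layer's parameters and
    compute the layer's outputs (a ReLU activation is assumed)."""
    weights, biases = params
    return np.maximum(0.0, weights @ layer_input + biases)

activations = rng.normal(size=layer_widths[0])       # the network's input values
for layer_index, params in enumerate(layer_params):
    pool = "sub-network 705" if layer_index % 2 == 0 else "sub-network 707"
    activations = run_layer(params, activations)
    # In the 3D IC, these activations would now be stored in the output memory
    # on the other die, ready to be read back as the next layer's inputs.
    print(f"layer {layer_index + 1} computed by {pool}")

print("network output:", activations)
```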


Alternatively, or conjunctively, the neural network or sub-network on one bonded layer partially or fully overlaps a memory (e.g., formed by one or more memory arrays) on another bonded layer in some embodiments. This memory in some embodiments is a parameter memory that stores machine-trained parameters for configuring the neurons of the neural network or sub-network to perform a particular operation. In other embodiments, this memory is an output memory that stores the outputs of the neurons (e.g., outputs of earlier stage neurons for later stage neurons).


While being vertically aligned with one memory, the neural network's neurons in some embodiments are on the same bonded layer with another memory. FIG. 8 illustrates one such example. It illustrates a 3D IC 800 with two IC dies 802 and 804 that have several components of the neural network. These components are several neurons 805 and an output memory 812 on the IC die 804, and a parameter memory 815 on the IC die 802. The output memory 812 stores values produced by the neurons 805, while the parameter memory 815 stores machine-trained parameters for configuring the neurons. As shown, the footprints of the arrangement of neurons 805 and the parameter memory 815 fully overlap in some embodiments. These footprints partially overlap in other embodiments, or do not overlap in yet other embodiments.


As further shown in FIG. 8, the components on the IC's dies 802 and 804 are interconnected by several vertical connections 810, which in some embodiments are DBI connections. In other embodiments, these connections are other types of direct bonding connections or TSV connections. As shown, numerous such connections 810 are used to electrically connect nodes of the neurons 805 on the IC die 804 to nodes of the parameter memory 815 on the IC die 802. Through these connections, the neurons receive the machine-trained parameters that configure the neural network to perform a set of operations (e.g., a set of one or more tasks, such as face recognition) for which the neural network has been trained.


The neurons 805 connect to the output memory 812 through one or more interconnect layers (also called metal layers or wiring layers) of the IC die 804. As known in the art, each IC die is manufactured with multiple interconnect layers that interconnect the circuit components (e.g., transistors) defined on the IC die's substrate. Through its connection with the output memory, the outputs of the neurons are stored so that these outputs can later be retrieved as inputs for later stage neurons or for the output of the neural network.



FIG. 9 illustrates another example of a 3D IC with different components of a neural network on different IC dies. This figure illustrates a 3D IC 900 with two IC dies 902 and 904 that have several components of the neural network. These components are several neurons 905 and a parameter memory 915 on the IC die 904, and an output memory 912 on the IC die 902. As shown, the footprints of the arrangement of neurons 905 and the output memory 912 partially overlap in some embodiments. In other embodiments, these footprints fully overlap, while in yet other embodiments, they do not overlap.


As further shown in FIG. 9, the components on the IC's dies 902 and 904 are interconnected by several vertical connections 910, which in some embodiments are DBI connections. In other embodiments, these connections are other types of direct bonding connections or TSV connections. As shown, numerous such connections 910 are used to electrically connect nodes of the neurons 905 on the IC die 904 to nodes of the output memory 912 on the IC die 902. Through these connections, the outputs of the neurons are stored so that these outputs can later be retrieved as inputs for later stage neurons or for the output of the neural network. As described above, the 3D IC of some embodiments has output memories and neurons on each of two face-to-face mounted dies (like dies 902 and 904) with the output memory on each die receiving outputs from neurons on another die and providing its content to neurons on its own die.


The neurons 905 connect to the parameter memory 915 through one or more interconnect layers of the IC die 904. Through its connection with the parameter memory, the neurons receive the machine-trained parameters (e.g., weight values for the linear operators of the neurons) that configure the neural network to perform a set of one or more tasks (e.g., face recognition) for which the neural network has been trained. When neurons are placed on both face-to-face mounted dies, some embodiments also place parameter memories on both dies in order to provide machine-trained parameters to neurons on the same IC die or to neurons on the other IC die.



FIG. 10 illustrates another example of a 3D IC with different components of a neural network on different IC dies. This figure illustrates a 3D IC 1000 with two IC dies 1002 and 1004 that have several components of the neural network. These components are several neurons 1005 on the IC die 1004, and an output memory 1012 and a parameter memory 1015 on the IC die 1002. As shown, the footprint of the arrangement of neurons 1005 partially overlaps the output memory 1012 and the parameter memory 1015.


As further shown in FIG. 10, the components on the IC's dies 1002 and 1004 are interconnected by several vertical connections 1010, which in some embodiments are DBI connections. In other embodiments, these connections are other types of direct bonding connections or TSV connections. As shown, numerous such connections 1010 are used to electrically connect nodes of the neurons 1005 on the IC die 1004 to either nodes of the output memory 1012 on the IC die 1002, or to nodes of the parameter memory 1015 on the IC die 1002. Through the connections 1010 with the output memory 1012, the outputs of the neurons are stored so that these outputs can later be retrieved as inputs for later stage neurons or for the output of the neural network. Also, through the connections 1010 with the parameter memory 1015, the neurons receive the machine-trained parameters (e.g., weight values for the linear operators of the neurons) that configure the neural network to perform a set of one or more tasks (e.g., face recognition) for which the neural network has been trained.


In some embodiments, the neurons on one bonded layer partially or fully overlap two memories on two different layers, with one memory storing machine-trained parameters and the other memory storing neuron output values. FIG. 11 illustrates one such example. This figure illustrates a 3D IC 1100 with multiple IC dies 1102, 1104, and 1106, each of which has a component of the neural network. These components are several neurons 1105 on the IC die 1104, an output memory 1112 on the IC die 1102, and a parameter memory 1115 on the IC die 1106. As shown, the footprints of the arrangement of neurons 1105 on the IC die 1104 and the output memory 1112 on the IC die 1102 partially or fully overlap. The footprint of the arrangement of neurons 1105 on the IC die 1104 also partially or fully overlaps with the footprint of the parameter memory 1115 on the IC die 1106.


As further shown in FIG. 11, the components on the IC's dies 1102, 1104, and 1106 are interconnected by several vertical connections 1110 and 1111. In this example, IC dies 1102 and 1104 are face-to-face mounted, while the IC dies 1106 and 1104 are face-to-back mounted with the face of the IC die 1106 mounted with the back of the IC die 1104. In some embodiments, the vertical connections 1110 between the dies 1102 and 1104 are direct bonded connections (like DBI connections), while the vertical connections 1111 between dies 1104 and 1106 are TSVs.


As shown, numerous such connections 1110 and 1111 are used to electrically connect nodes of the neurons 1105 on the IC die 1104 to either nodes of the output memory 1112 on the IC die 1102, or to nodes of the parameter memory 1115 on the IC die 1106. Through the connections 1110 with the output memory 1112, the outputs of the neurons are stored so that these outputs can later be retrieved as inputs for later stage neurons or for the output of the neural network. Also, through the connections 1111 with the parameter memory 1115, the neurons receive the machine-trained parameters that configure the neural network to perform a set of one or more tasks (e.g., face recognition) for which the neural network has been trained.


One of ordinary skill will realize that other permutations of 3D circuit structures are also possible. For instance, in some embodiments, the 3D circuit has neurons on two or more bonded layers with parameter and/or output memories on the same or different bonded layers. Also, in the above-described embodiments, the bonded layers (two or more) that contain a neural network's neurons and memories do not have any intervening bonded layer in some embodiments. In other embodiments, however, these bonded layers have one or more intervening bonded layers between or among them.


In some embodiments, the output and parameter memories of the neural network have different memory structures (i.e., are different types of memories). For instance, in some embodiments, the output memory (e.g., memory 812, 912, 1012, or 1112) has a different type of output interface than the parameter memory (e.g., the memory 815, 915, 1015, or 1115). For example, the output memory's output interface allows for random access of this memory's storage locations, while the parameter memory's output interface only supports sequential read access.


Alternatively, or conjunctively, the parameter memory (e.g., the memory 815, 915, 1015, or 1115) of the neural network is a read-only memory (ROM), while the output memory (e.g., memory 812, 912, 1012, or 1112) of the neural network is a read-write memory in some embodiments. The parameter memory in some embodiments is a sequential ROM that sequentially reads out locations in the ROM to output the parameters that configure the neural network to perform certain machine-trained task(s).
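

The difference between the two memory interfaces can be sketched as follows; the class and method names are hypothetical and are meant only to contrast a sequentially read, read-only parameter store with a randomly addressable, read-write output store.

```python
# Sketch of the two memory interfaces described above: a sequential ROM that
# only streams its locations out in order (parameter memory), and a read-write
# memory with random access (output memory). Names are illustrative only.

class SequentialParameterROM:
    def __init__(self, parameters):
        self._parameters = list(parameters)   # fixed contents, read-only
        self._next = 0

    def read_next(self):
        """Sequential access only: each call returns the next stored parameter."""
        value = self._parameters[self._next]
        self._next = (self._next + 1) % len(self._parameters)
        return value

class RandomAccessOutputMemory:
    def __init__(self, size):
        self._cells = [0.0] * size

    def write(self, address, value):
        self._cells[address] = value          # store a neuron output

    def read(self, address):
        return self._cells[address]           # retrieve it for a later-stage neuron

rom = SequentialParameterROM([0.5, -0.3, 0.8])
ram = RandomAccessOutputMemory(16)
ram.write(3, rom.read_next())                 # e.g., stash a value computed with a streamed weight
print(rom.read_next(), ram.read(3))
```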


The output memory (e.g., memory 812, 912, 1012, or 1112) in some embodiments is a dynamic random access memory (DRAM). In other embodiments, the output memory is an ephemeral RAM (ERAM) that has one or more arrays of storage cells (e.g., capacitive cells) and pass transistors like traditional DRAMs. However, unlike traditional DRAMs, the ERAM output memory does not use read-independent refresh cycles to charge the storage cells. This is because the values in the ERAM output memory are written and read at such rates that these values do not need to be refreshed with separate refresh cycles. In other words, because intermediate output values of the neural network only need to be used as input into the next layer (or few layers) of the neural network, they are temporary in nature. Thus, the output memory can be implemented with a compact, DRAM-like memory architecture without the use of the read-independent refresh cycles of traditional DRAMs.
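

The ERAM idea can be illustrated with a toy model in which each cell's value is only valid for a limited retention window, yet the layer-by-layer access pattern always reads a stored output well inside that window; the retention window and cycle counts below are arbitrary illustrative numbers.

```python
# Toy model of the ERAM concept described above: storage cells hold a value
# only for a limited retention window (like DRAM capacitive cells), but the
# network pipeline reads each stored layer output within that window, so no
# separate read-independent refresh cycle is ever scheduled.

RETENTION_CYCLES = 1000            # assumed cell retention, in clock cycles

class EphemeralRAM:
    def __init__(self):
        self._cells = {}           # address -> (value, cycle at which it was written)

    def write(self, cycle, address, value):
        self._cells[address] = (value, cycle)

    def read(self, cycle, address):
        value, written = self._cells[address]
        if cycle - written > RETENTION_CYCLES:
            raise RuntimeError("value decayed: a refresh would have been needed")
        return value

eram = EphemeralRAM()
cycle = 0
layer_outputs = [0.1, 0.7, 0.3]
# Layer N writes its intermediate outputs ...
for addr, value in enumerate(layer_outputs):
    eram.write(cycle, addr, value)
# ... and layer N+1 reads them back a few hundred cycles later, well inside
# the retention window, so no refresh cycles are required.
cycle += 400
next_layer_inputs = [eram.read(cycle, addr) for addr in range(len(layer_outputs))]
print(next_layer_inputs)
```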


Using different dies for the output memory 1112 and parameter memory 1115 allows these dies to be manufactured by processes that are optimal for these types of memories. Similarly, using a different die for the neurons of the neural network than for the output memory and/or parameter memory also allows each of these components to be manufactured by processes that are optimal for each of these types of components.



FIG. 12 illustrates a device 1200 that uses a 3D IC 1205, such as 3D IC 100, 600, 700, 800, 900, or 1000. In this example, the 3D IC 1205 is formed by two face-to-face mounted IC dies 1202 and 1204 that have numerous direct bonded connections 1210 between them. In other examples, the 3D IC 1205 includes three or more vertically stacked IC dies, such as the 3D IC 1100. In some embodiments, the 3D IC 1205 implements a neural network that has gone through a machine-learning process to train its configurable components to perform a certain task (e.g., to perform face recognition).


As shown, the 3D IC 1205 includes a case 1250 (sometimes called a cap or epoxy packaging) that encapsulates the dies 1202 and 1204 of this IC in a secure housing 1215. On the back side of the die 1204, one or more interconnect layers 1206 are defined to connect the 3D IC to a ball grid array 1220 that allows this IC to be mounted on a printed circuit board 1230 of the device 1200. In some embodiments, the 3D IC includes packaging with a substrate on which the die 1204 is mounted (i.e., between the ball grid array and the IC die 1204), while in other embodiments this packaging does not have any such substrate.


Some embodiments of the invention provide an integrated circuit (IC) with a defect-tolerant neural network. The neural network has one or more redundant neurons in some embodiments. After the IC is manufactured, a defective neuron in the neural network can be replaced by a redundant neuron (i.e., the redundant neuron can be assigned the operation of the defective neuron). The routing fabric of the neural network can be reconfigured so that it re-routes signals around the discarded, defective neuron. In some embodiments, the re-configured routing fabric does not provide any signal to or forward any signal from the discarded, defective neuron, and instead provides signals to and forwards signals from the redundant neuron that takes the defective neuron's position in the neural network.


In the embodiments that implement a neural network by re-purposing (i.e., reconfiguring) one or more individual neurons to implement neurons of multiple stages of the neural network, the IC discards a defective neuron by removing it from the pool of neurons that it configures to perform the operation(s) of neurons in one or more stages of neurons, and assigning this defective neuron's configuration(s) (i.e., its machine-trained parameter set(s)) to a redundant neuron. In some of these embodiments, the IC would re-route around the defective neuron and route to the redundant neuron, by (1) supplying machine-trained parameters and input signals (e.g., previous stage neuron outputs) to the redundant neuron instead of supplying these parameters and signals to the defective neuron, and (2) storing the output(s) of the redundant neuron instead of storing the output(s) of the defective neuron.



FIGS. 13 and 14 illustrate an example of one such neural network. These figures show a machine-trained circuit 1300 that has two sets of neurons 1305 and 1310 that are re-purposed (reconfigured) to implement a multi-stage neural network 1350. In this example, the neural network 1350 has nine layers. Each of these neuron sets has one redundant neuron 1325 or 1330 to replace any defective neuron in its set, as further described below.


The machine-trained circuit 1300 has two parameter memories 1315a and 1315b that respectively store machine-trained parameters for the neuron sets 1305 and 1310. These machine-trained parameters iteratively configure each neuron set to implement a different stage in the multi-stage network. In the example illustrated in FIG. 13, the memory 1315a stores parameters that sequentially re-configure the neuron set 1305 to implement the odd neuron layers (i.e., the first, third, fifth, seventh and ninth layers) of the neural network, while the memory 1315b stores parameters that sequentially re-configure the neuron set 1310 to implement the even neuron layers (i.e., the second, fourth, sixth and eighth layers). The parameters in the memories 1315a and 1315b were generated through machine-learning processes, and configure the neurons in the sets 1305 and 1310 to perform a set of one or more operations (e.g., to perform face recognition or voice recognition).


The machine-trained circuit 1300 also has an output memory 1312. The output of each neuron is stored in the output memory 1312. With the exception of the neurons in the first neuron stage, the inputs of the neurons in the other stages are retrieved from the output memory. Based on their inputs, the neurons compute their outputs, which again are stored in the output memory 1312 for feeding the next stage neurons (when intermediate neurons compute the outputs) or for providing the output of the neural network (when the final stage neurons compute their outputs).


In some embodiments, all the components 1305, 1310, 1312, and 1315 of the circuit 1300 are on one bonded layer (e.g., one IC die or wafer). In other embodiments, different components are on different layers. For instance, the neurons 1305 and 1310 can be on a different IC die than the IC die that includes one of the memories 1312 or 1315, or both memories 1312 and 1315. Alternatively, in some embodiments, the neurons 1305 are on one IC die while the neurons 1310 are on another IC die. In some of these embodiments, the IC die of the neurons 1305 or the neurons 1310 also includes one or both of the parameter and output memories.


In the example illustrated in FIG. 13, none of the neurons are defective. Hence, the redundant neurons 1325 and 1330 are not used to implement any of the neuron stages of the neural network 1350. FIG. 14, however, illustrates an example where one neuron 1405 in the first neuron set 1305 is defective and a neural network 1450 is implemented by using the redundant neuron 1325 of the first neuron set 1305. This figure illustrates a machine-trained circuit 1400 that is identical to the machine-trained circuit 1300, except that the neuron 1405 in the first neuron set 1305 is defective.


To address this defect, a defect-curing process that configures the circuit 1400 removes the defective neuron 1405 from the first neuron set and replaces this defective neuron with the redundant neuron 1325 of this set. The defect-curing process assigns to the redundant neuron the machine-trained parameters that would have been assigned to the defective neuron, in order to allow this neuron to implement one of the neurons in the odd stages of the neural network 1450. This process also changes the storage and retrieval logic of the machine-trained circuit 1400 to ensure that the redundant neuron 1325 receives the desired input from, and stores its output in, the output memory 1312. FIG. 14 shows the neural network 1450 implemented with the set of neurons 1305R implementing the odd stages of this network. Here, the designation R indicates that the neuron set 1305 is using its redundant neuron 1325.
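The change to the storage and retrieval logic can be pictured as a remapping of output-memory slots. The sketch below (illustrative Python with hypothetical slot indices, not the circuit's actual logic) redirects reads and writes that would have targeted the defective neuron 1405's slot to the slot backing the redundant neuron 1325, so that earlier and later stages are unaffected:

```python
# Hypothetical sketch of the output-memory remapping used to cure a defect.
# Slot numbering and the memory layout are illustrative assumptions.

class OutputMemory:
    """Output memory (e.g., 1312) with a logical-to-physical slot remap."""

    def __init__(self, num_slots, remap=None):
        self.slots = [0.0] * num_slots
        self.remap = remap or {}          # logical slot -> physical slot

    def _slot(self, logical):
        return self.remap.get(logical, logical)

    def store(self, logical, value):      # used when a neuron writes its output
        self.slots[self._slot(logical)] = value

    def load(self, logical):              # used when a later stage reads an input
        return self.slots[self._slot(logical)]


# Neuron 1405's logical slot (say slot 5) is redirected to the slot backing the
# redundant neuron 1325 (say slot 8), so readers and writers need not change.
mem = OutputMemory(num_slots=9, remap={5: 8})
mem.store(5, 0.42)          # the redundant neuron's output lands in slot 8
print(mem.load(5))          # 0.42, retrieved transparently by the next stage
```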



FIG. 15 illustrates a defect-curing process 1500 of some embodiments. In some embodiments, this process is performed each time the IC with the neural network initializes (i.e., powers up). The process 1500 initially determines (at 1505) whether a setting stored on the IC indicates that one or more neurons are defective. In some embodiments, this setting is stored in a ROM of the IC during a testing phase of the IC after it has been manufactured. This testing phase identifies defective neurons and stores the identities of the defective neurons in the ROM in some embodiments. If only one redundant neuron exists for each neuron set (e.g., 1305 or 1310) of the IC, the testing process in some embodiments discards any IC with more than one defective neuron in any one of its neuron sets.


When the setting does not identify any defective neuron, the process 1500 loads (at 1515) the settings that allow the neurons to be configured with the user design that has been provided to make the neural network implement a set of operations. After 1515, the process ends. On the other hand, when the setting identifies a defective neuron, the process 1500 removes (at 1520) the defective neuron from the pool of neurons and replaces (at 1520) this defective neuron with the redundant neuron. The defect-curing process then assigns (at 1525) to the redundant neuron the machine-trained parameters that would have been assigned to the defective neuron, to allow this neuron to perform the operations of the defective neuron that are needed to implement the neural network. At 1530, the process changes the storage and retrieval logic of the machine-trained circuit to ensure that the redundant neuron receives the desired input from and stores its output in the output memory. Finally, at 1535, the process 1500 directs the neural network to start operating based on the new settings that were specified at 1525 and 1530. After 1535, the process ends.
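The overall control flow of process 1500 can be summarized in the following Python sketch; the ROM-setting format, the data layout, and the configuration steps are hypothetical stand-ins for the on-chip mechanisms described above:

```python
# Hypothetical sketch of defect-curing process 1500 (illustrative data layout).

def defect_curing_process(rom_setting, trained_params, num_neurons):
    """Run at each power-up; returns the configuration the network starts with.

    rom_setting    -- record written to ROM during post-manufacture testing,
                      e.g. {"defective": 5, "redundant": 8}, or None
    trained_params -- machine-trained parameter set per logical neuron position
    num_neurons    -- physical neurons available in the set
    """
    pool = list(range(num_neurons))            # physical neurons in the pool
    assignment = {i: i for i in range(len(trained_params))}  # logical -> physical

    if rom_setting is not None:                              # 1505
        defective, redundant = rom_setting["defective"], rom_setting["redundant"]
        pool.remove(defective)                               # 1520: discard
        for logical, physical in assignment.items():
            if physical == defective:
                assignment[logical] = redundant              # 1520/1525: replace and
                                                             # reassign its parameters
        # 1530: the output-memory storage/retrieval logic follows `assignment`.

    # 1515 / 1535: load the settings and start operating with them.
    return {"pool": pool, "assignment": assignment, "params": trained_params}


config = defect_curing_process({"defective": 5, "redundant": 8},
                               trained_params=[f"w{i}" for i in range(8)],
                               num_neurons=9)
print(config["assignment"][5])   # 8: the redundant neuron takes the defective
                                 # neuron's place and parameter set
```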


While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. For instance, one of ordinary skill will understand that while several embodiments of the invention have been described above by reference to machine-trained neural networks with neurons, other embodiments of the invention are implemented on other machine-trained networks with other kinds of machine-trained processing nodes.


The 3D circuits and ICs of some embodiments have been described by reference to several 3D structures with vertically aligned IC dies. However, other embodiments are implemented with a myriad of other 3D structures. For example, in some embodiments, the 3D circuits are formed with multiple smaller dies placed on a larger die or wafer. Also, some embodiments are implemented in a 3D structure that is formed by vertically stacking two sets of vertically stacked multi-die structures. Therefore, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.

Claims
  • 1. A circuit comprising: a first bonded layer; and a second bonded layer vertically stacked with the first bonded layer; wherein the first bonded layer comprises at least one processing node adapted for machine-training; and wherein the second bonded layer comprises one or more memories electrically connected to the at least one processing node and configured to receive one or both of input for or output produced by the at least one processing node.
  • 2. The circuit of claim 1, wherein the one or more memories comprise a first memory to store the input for the at least one processing node, and a second memory to store the output produced by the at least one processing node.
  • 3. The circuit of claim 1, wherein the first bonded layer comprises a plurality of processing nodes forming a network adapted for machine-training in which an output of one of the processing nodes serves as an input for another one of the processing nodes.
  • 4. The circuit of claim 3, wherein the network is a neural network and the processing nodes serve as neurons.
  • 5. The circuit of claim 4, wherein the neural network comprises multiple stages of neurons with a routing fabric that drives output of earlier stage neurons to input of later stage neurons.
  • 6. The circuit of claim 1, wherein one of the first and second bonded layers is an integrated circuit (IC) die and the other of the first and second bonded layers is an IC wafer.
  • 7. The circuit of claim 1, wherein the first and second bonded layers are first and second integrated circuit (IC) dies.
  • 8. The circuit of claim 7, wherein the first and second IC dies have a same size.
  • 9. The circuit of claim 7, wherein the first and second IC dies have bounding shapes that overlap each other.
  • 10. The circuit of claim 1, wherein the first and second bonded layers are first and second integrated circuit (IC) wafers.
  • 11. The circuit of claim 1, wherein the one or more memories of the second bonded layer are connected to the at least one processing node through a vertical conductive interface.
  • 12. The circuit of claim 11, wherein the vertical conductive interface comprises through-silicon vias or direct bonded interconnect connections.
  • 13. The circuit of claim 11, wherein the one or more memories of the second layer includes a plurality of memory arrays and at least two of the memory arrays are connected to the at least one processing node through the vertical conductive interface.
  • 14. The circuit of claim 1, wherein each of the first and second bonded layers comprises (i) an integrated circuit substrate on which a plurality of circuit components are defined and (ii) a plurality of wiring layers for connecting the circuit components.
  • 15. The circuit of claim 1, further comprising a third bonded layer comprising a second set of one or more memories.
  • 16. The circuit of claim 15, wherein the one or more memories of the second bonded layer comprise output memory and the second set of one or more memories of the third bonded layer comprises a parameter memory.
  • 17. The circuit of claim 1, wherein the one or more memories are electrically connected to the at least one processing node by hybrid bonding.
  • 18. A circuit comprising: a first layer comprising a network adapted for machine-training; and a second layer vertically stacked with the first layer, the second layer comprising one or more memories; wherein the network of the first layer is electrically coupled to the one or more memories of the second layer, and wherein the one or more memories are configured to receive one or both of input for or output produced by the network.
  • 19. The circuit of claim 18, wherein the network comprises a plurality of processing nodes.
  • 20. The circuit of claim 19, wherein the network is a neural network and the processing nodes comprise neurons.
  • 21. The circuit of claim 18, wherein the one or more memories comprise a first memory to store machine-trained parameters for the network, and a second memory to store values produced by the network.
  • 22. The circuit of claim 18, wherein the one or more memories are electrically coupled to the network by hybrid bonding.
CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 15/859,551, filed on Dec. 31, 2017, which claims the benefit of the filing date of U.S. Provisional Patent Application No. 62/541,064 filed Aug. 3, 2017, the disclosure of which is incorporated herein by reference.

Related Publications (1)
Number Date Country
20220108161 A1 Apr 2022 US
Provisional Applications (1)
Number Date Country
62541064 Aug 2017 US
Continuations (1)
Number Date Country
Parent 15859551 Dec 2017 US
Child 17500374 US