The present invention relates to in-memory computing devices, and more particularly, to in-memory computing devices including multiple types of memory cells.
A neural network is an information processing paradigm that is inspired by the way biological nervous systems process information. With the availability of large training datasets and sophisticated learning algorithms, neural networks have facilitated major advances in numerous domains such as computer vision, speech recognition, and natural language processing.
The basic unit of computation in a neural network is a neuron. A neuron receives inputs from other neurons or from an external source, and computes an output, for example as a function of a sum-of-products of the form w1x1+w2x2+ . . . +wnxn.
In the sum-of-products expression above, each product term is a product of a variable input xi and a weight wi. The weight wi can vary among the terms, corresponding for example to coefficients of the variable inputs xi. Similarly, outputs from the other neurons in the hidden layer can also be calculated. The outputs of the two neurons in the hidden layer 110 act as inputs to the output neuron in the output layer 104.
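The neuron computation described above can be sketched in a few lines of Python. This is purely an illustration; the weight values, inputs, and the choice of a sigmoid activation are assumptions, not part of the original description.

```python
# Sketch of a single neuron's sum-of-products computation.
# Weights, inputs, and the sigmoid activation are illustrative assumptions.
import math

def neuron_output(inputs, weights, bias=0.0):
    # Sum of products: each variable input x_i is multiplied by its weight w_i.
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    # A nonlinear activation (here, a sigmoid) maps the sum to the output.
    return 1.0 / (1.0 + math.exp(-s))

# Two hidden neurons feeding one output neuron, as in the example above.
x = [0.5, -1.0]                          # inputs to the hidden layer
h1 = neuron_output(x, [0.8, 0.2])        # first hidden neuron
h2 = neuron_output(x, [-0.3, 0.9])       # second hidden neuron
y = neuron_output([h1, h2], [1.0, -1.0]) # output neuron
```

The hidden-layer outputs h1 and h2 act as the inputs to the output neuron, mirroring the two-neuron hidden layer described above.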
Neural networks are used to learn patterns that best represent a large set of data. The hidden layers closer to the input layer learn high level generic patterns, and the hidden layers closer to the output layer learn more data-specific patterns. Training is a phase in which a neural network learns from training data. During training, the connections in the synaptic layers are assigned weights based on the results of the training session. Inference is a stage in which a trained neural network is used to infer/predict input data and produce output data based on the prediction. An inference accuracy of a neural network is the rate at which it correctly predicts or infers input data.
In-memory computing is an approach in which memory cells, organized in an in-memory computing device, can be used for both data processing and memory storage. A neural network can be implemented in an in-memory computing device. The weights for the sum-of-products function can be stored in memory cells of the in-memory computing device. The sum-of-products function can be realized as a circuit operation in the in-memory computing device in which the electrical characteristics of the memory cells of the array effectuate the function.
Device variability in the memory cells, inaccurate read and write operations, and other non-ideal device characteristics can lead to fluctuations in the weights stored in the memory cells of the in-memory computing devices. The fluctuations in the stored weights, especially in in-memory computing devices using reprogrammable non-volatile memory cells, such as floating gate memories, phase change memories, resistive RAMs, etc., can lead to less accurate output data by the neural networks implemented in the in-memory computing devices. It is desirable to provide an in-memory computing device with higher inference accuracy.
An integrated circuit is described herein that comprises an in-memory computing device implementing a neural network. The in-memory computing device has a plurality of synaptic layers, the plurality of synaptic layers including first and second types of synaptic layers. The first type of synaptic layer comprises a first type of memory cells while the second type of synaptic layer comprises a second type of memory cells. Memory cells of the first type are configured for more accurate data storage, and/or more stable read/write operations than memory cells of the second type. Weights stored in memory cells of the first type may have a lower tendency to fluctuate from precise values than memory cells of the second type. Memory cells of the first type may differ from memory cells of the second type in terms of the structures of the memory cells, sizes of the memory cells and/or algorithms used to perform read/write operations in the memory cells.
In some embodiments, the weights stored in the first and second types of memory cells may be the resistance of the memory cells, for example, memory cells such as resistive RAM, magnetic RAM, ferroelectric RAM, and charge trapping memories. In some embodiments, the weights stored may be the information stored in the memory cells, for example, a bit “0” and “1” in static RAM and dynamic RAM. In some embodiments, a digital representation of a weight may be stored in memory cells in a sequence in a row of memory cells where each memory cell in the sequence represents a binary digit in the digital representation of the weight.
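The digital representation described above, in which each memory cell in a row stores one binary digit of a weight, can be sketched as follows. The 8-bit unsigned encoding is an illustrative assumption; any bit width could be used.

```python
# Sketch: storing a digital weight as binary digits across a row of
# single-bit memory cells. The 8-bit width is an illustrative assumption.

def weight_to_cells(weight, bits=8):
    # Most-significant bit first; each list element models one memory cell.
    return [(weight >> i) & 1 for i in range(bits - 1, -1, -1)]

def cells_to_weight(cells):
    # Reassemble the weight from the bits read out of the row of cells.
    value = 0
    for bit in cells:
        value = (value << 1) | bit
    return value

row = weight_to_cells(0b10110010)   # one row of eight single-bit cells
assert cells_to_weight(row) == 0b10110010
```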
The first type of synaptic layer and the second type of synaptic layer may comprise an array of memory cells having an M number of rows and an N number of columns. Each memory cell in the array of memory cells stores a weight factor Wmn. Columns of memory cells in the array are coupled to a set of first access lines, and rows of memory cells are coupled to a set of second access lines. The array of memory cells may further comprise decoder and driver circuitry electrically coupled to the set of first access lines and the set of second access lines, and sensing circuitry, such as sense amplifiers, electrically coupled to the set of second access lines.
In some embodiments, signals on the first access lines in the set of first access lines represent inputs xm to the respective rows. Output current sensed at a particular second access line in the set of second access lines by the sensing circuitry can represent a sum-of-products of the inputs xm by respective weight factors Wmn in the column of memory cells coupled to the particular second access line. In some embodiments, outputs sensed in an array of memory cells in a first or second type of synaptic layer are input signals to an array of memory cells in another synaptic layer.
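The array operation described above can be modeled in software: inputs xm are asserted on the first access lines (rows), and the output sensed on each second access line (column n) is the sum over all rows of xm multiplied by the stored weight Wmn. The array size and weight values below are illustrative assumptions.

```python
# Model of the in-memory sum-of-products: the current sensed on column n
# represents sum over m of x_m * W_mn. Values are illustrative.

def crossbar_outputs(x, W):
    # W[m][n] is the weight stored in the cell at row m, column n.
    rows, cols = len(W), len(W[0])
    return [sum(x[m] * W[m][n] for m in range(rows)) for n in range(cols)]

x = [1.0, 0.5]                      # inputs on two first access lines
W = [[0.2, 0.4, 0.6],               # weights in a 2-row x 3-column array
     [0.1, 0.3, 0.5]]
y = crossbar_outputs(x, W)          # outputs sensed on three columns
```

The outputs y can then serve as the input signals to an array of memory cells in another synaptic layer, as described above.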
Some embodiments of an in-memory computing device may further comprise a multiplier and accumulator unit. The multiplier and accumulator unit may receive weight factors stored in memory cells in layers of the first and second types of synaptic layers and inputs to estimate a sum-of-products of the inputs and the weight factors.
Some embodiments of an in-memory computing device may further include a plurality of a third type of synaptic layer comprising a third type of memory cells. The third type of memory cell is different than the first type of memory cells and the second type of memory cells.
Methods for manufacturing an in-memory computing device as described herein are also provided.
Other aspects and advantages of the present invention can be seen on review of the drawings, the detailed description, and the claims, which follow.
A detailed description of embodiments of the present invention is provided with reference to the
Referring to
The layers of the first type of synaptic layer in the memory system 200 comprise a first type of memory cells that can be used to store weights for synaptic layers in a neural network closer to the input layer. The layers of the second type of synaptic layer in the memory system 200 comprise a second type of memory cells that can be used to store weights for synaptic layers in a neural network closer to the output layer.
The overall inference accuracy of the neural network can be increased by using memory cells of the first type in layers of the first type of synaptic layer, which may store more accurate weights, or be less prone to weight fluctuations, than memory cells of the second type. Memory cells of the first type are configured for more accurate data storage, and/or more stable read/write operations, than memory cells of the second type. The first type of memory cell may differ from the second type of memory cell in terms of the types of memories used in the cells, the structures of the memory cells, or the sizes of the memory cells. Memory cells of the first type may also be less prone to device variability and operation failures, such as failed read or write operations.
The memory cells of the first type of memory cell may be volatile memory cells (e.g., SRAM and DRAM) or non-volatile memory cells (e.g., mask ROM, fuse ROM, and resistive RAM). The memory cells of the first type of memory cell may be read-only memory cells (e.g., mask ROM, fuse ROM) or reprogrammable memory cells (e.g., SRAM, DRAM, and resistive RAM). In some embodiments, the weights stored in the memory cells of the first type may be the information stored in the memory cells, for example, SRAM and DRAM storing bits “0” and “1”. The accuracy of the weights stored in an SRAM or DRAM cell can be maintained by sense amplifiers attached to the cell. In some embodiments, the weights stored in the first type of memory cell may be sensed based on the resistance of the memory cells, for example, memory cells such as resistive RAM, floating gate MOSFETs, dielectric charge trapping devices (e.g., SONOS, BE-SONOS, TANOS, MA BE-SONOS) and phase change memories.
Memory cells of the second type may be more prone to weight fluctuations, device variability and operation failures when compared to memory cells of the first type. Memory cells of the second type may be non-volatile memory cells, such as resistive RAM, floating gate MOSFETs, dielectric charge trapping devices (e.g., SONOS, BE-SONOS, TANOS, MA BE-SONOS), phase change memories, ferroelectric RAMs, and magnetic RAMs. Memory cells of the second type may be reprogrammable memory cells so that weights stored in the second type of memory cell can be changed while training the neural network or fine-tuning the neural network for higher inference accuracy.
In some embodiments, the weights stored in memory cells of the second type may be sensed based on the resistances of the memory cells, for example, memory cells such as resistive RAM, floating gate MOSFETs, dielectric charge trapping devices (e.g., SONOS, BE-SONOS, TANOS, MA BE-SONOS) and phase change memories.
In some embodiments of the first type of memory cell, the weights stored may be the two or more bits of information stored in the memory cells, for example, bits “0” and “1” in SRAMs, DRAMs, and ROMs.
In some embodiments, the memory cells of the first type and the memory cells of the second type may include different memory cells, i.e., the structure of the memory cells of the first type being different than the structure of the memory cells of the second type. The memory cells of the first type may include volatile memory cells (e.g., SRAM and DRAM), and the memory cells of the second type may include non-volatile memory cells (e.g., resistive RAM). In some embodiments, memory cells of the first type may include read-only memory cells (e.g., fuse ROM), and memory cells of the second type may include reprogrammable memory cells (e.g., resistive RAM, phase change memories, charge trapping memories).
In some embodiments of the in-memory computing device, memory cells of the first type and memory cells of the second type may include the same type of memories (e.g., resistive RAMs), and the memory cells of the first type may be larger than the memory cells of the second type. The larger memory cells of the first type will be less noisy than the memory cells of the second type, resulting in less weight fluctuation in the memory cells of the first type. In some embodiments, the fabrication process of the first type of memory cell may be different than the fabrication process of the second type of memory cell, resulting in the memory cells of the first type having less device variability than the memory cells of the second type. In some embodiments, the memory material for data storage in memory cells of the first type may be different than the memory material used in memory cells of the second type. For example, memory cells of the first type may be resistive RAMs with HfOx as the memory material and memory cells of the second type may be resistive RAMs with CoOx as the memory material.
In some embodiments, data may be read or written in the first type of memory cell with a different algorithm than the one used to read or write data in the second type of memory cell. For example, when multi-bit charge trapping memories are used as the first and second types of memory cell, incremental-step-pulse programming (ISPP) can be used to tighten the threshold voltage distributions and resistance spreads for memory cells of the first type, and single-pulse programming can be used for memory cells of the second type.
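The program-and-verify loop behind ISPP can be sketched as follows. The device model (how much a pulse shifts the threshold voltage) and all numeric values are simplified assumptions for illustration only.

```python
# Sketch of incremental-step-pulse programming (ISPP): apply a program
# pulse, verify the cell's threshold voltage against the target, and
# increase the pulse amplitude until the target is reached.
# The device model and all numbers are illustrative assumptions.

def ispp_program(cell_vt, target_vt, v_start=2.0, v_step=0.2,
                 shift_per_volt=0.05, max_pulses=50):
    v_pulse = v_start
    pulses = 0
    while cell_vt < target_vt and pulses < max_pulses:
        # Each pulse shifts the threshold voltage by an amount that grows
        # with the pulse amplitude (a simplified device model).
        cell_vt += shift_per_volt * v_pulse
        v_pulse += v_step          # step up the amplitude for the next pulse
        pulses += 1
    return cell_vt, pulses

final_vt, n = ispp_program(cell_vt=1.0, target_vt=2.5)
```

Because programming stops as soon as a cell passes verify, cells converge to a narrow band just above the target, which is what tightens the threshold-voltage distribution relative to a single large pulse.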
In some embodiments, the memory system 200 may include a plurality of layers of a third type of synaptic layer. Layers of the third type of synaptic layer may comprise memory cells of a third type that can be used to store weights for middle synaptic layers in a neural network. Weights stored in memory cells of the third type may be less accurate than the weights stored in memory cells of the first type, and more accurate than the weights stored in memory cells of the second type. In some embodiments, the memory system 200 may include any number of types of memory cells, each type of memory cell having a different degree of weight fluctuations.
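The differing degrees of weight fluctuation among cell types can be pictured with a small simulation. This is entirely illustrative: the Gaussian read-noise model and the noise magnitudes are assumptions, not device data from the original text.

```python
# Illustrative simulation: weights read from first-type cells fluctuate
# less than weights read from second-type cells. The Gaussian noise model
# and its magnitudes are assumptions.
import random

random.seed(0)

def read_weight(stored, sigma):
    # A read returns the stored weight plus device-dependent fluctuation.
    return stored + random.gauss(0.0, sigma)

stored = 0.5
first_type  = [read_weight(stored, sigma=0.01) for _ in range(1000)]
second_type = [read_weight(stored, sigma=0.10) for _ in range(1000)]

def mean_abs_error(samples):
    return sum(abs(s - stored) for s in samples) / len(samples)

# First-type cells stay closer to the programmed weight on average.
assert mean_abs_error(first_type) < mean_abs_error(second_type)
```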
Input/output circuits 593 receive input data from sources external to the in-memory computing device 500. The input/output circuits 593 also drive output data to destinations external to the in-memory computing device 500. Input/output data and control signals are moved via data bus 505 between the input/output circuits 593, the controller 504 and input/output ports on the in-memory computing device 500 or other data sources internal or external to the in-memory computing device 500, such as a general purpose processor or special purpose application circuitry, or a combination of modules providing system-on-a-chip functionality supported by the memory system 502. Buffer circuits 590 can be coupled to the input/output circuits 593 and the controller 504 to store input/output data and control signals.
The controller 504 can include circuits for selectively applying program voltages to the memory cells of the first type in the synaptic layers of the first type, and the memory cells of the second type in the synaptic layers of the second type in the memory system 502 in response to the input data and control signals in the buffer circuits 590. In the example shown in
The synaptic layer of the first type of synaptic layer 610 also includes a set of second access lines 613 coupled to the memory cells of the first type in respective columns of the memory cells of the first type, and a column decoder/drivers 616. A set of sensing circuits 617, such as sense amplifiers, is coupled to respective second access lines in the set of second access lines via the column decoder/drivers 616. In some embodiments, the set of sensing circuits 617 may be coupled to second access lines in the set of second access lines directly. For sum-of-products operations using the array of the memory cells of the first type, the set of sensing circuits 617 can sense current at the second access lines 613 from the array of the memory cells of the first type of memory cell 611. Currents sensed at a particular second access line in the set of second access lines can represent a sum-of-products of the inputs asserted on the first access lines and the weight factors stored in the array of the memory cells of the first type of memory cell 611. Sensed data from the second access lines are supplied to the data buffer 618. The data buffer 618 can store the sum-of-products from the array of the memory cells of the first type of memory cell 611.
Memory cell addresses and input data from external sources are supplied from the controller 504 to the row decoder/drivers 615 of the array of the memory cells of the first type of memory cell 611 through the bus 503. Input data from other synaptic layers can also be supplied to the row decoder/drivers 615 of the array of the memory cells of the first type of memory cell 611. For memory read operations, sensed data from the sensing circuits 617 are supplied to the data buffer 618, which is in turn coupled to the controller 504 via the bus 503.
Similar to the first type of synaptic layer 610, the first type of synaptic layer 620 includes an array of memory cells of the first type of memory cell 621, a row decoder/drivers 625, a column decoder/drivers 626, a set of sensing circuits 627 and a data buffer 628.
The memory system 600 also includes synaptic layers of the second type, 630 and 640. The second type of synaptic layer 630 includes an array of memory cells of the second type of memory cell 631, a row decoder/drivers 635, a column decoder/drivers 636, a set of sensing circuits 637 and a data buffer 638. The second type of synaptic layer 640 includes an array of memory cells of the second type of memory cell 641, a row decoder/drivers 645, a column decoder/drivers 646, a set of sensing circuits 647 and a data buffer 648.
For the first and second types of memory cells that include phase change memories, resistive RAMs, ferroelectric RAMs, and magnetic RAMs, the first access lines can be bit lines and the second access lines can be word lines or vice versa. For charge trapping memories, the first access lines can be word lines and the second access lines can be bit lines. Charge trapping memories may also have third access lines such as source lines.
A set of second access lines (e.g., 791, 792, and 793) is coupled to the memory cells of the first type in respective columns of the memory cells of the first type. A set of first access lines (e.g., 781, 782) is coupled to the memory cells of the first type in respective rows of memory cells of the first type. The set of first access lines (e.g., 781, 782) is coupled to the row decoder/drivers 615 and the set of second access lines are coupled to the column decoder 616. Signals on the first access lines in the set of first access lines can represent inputs x1, x2 to the respective rows. As shown in
The sensing circuit 617 is coupled to respective second access lines in the set of second access lines via the column decoder 616. Current (e.g., y1, y2, y3) sensed at a particular second access line (e.g., 791, 792, 793) in the set of second access lines can represent a sum-of-products of the inputs x1, x2 by respective weight factors Wmn. The sum-of-products y1, y2, y3 can be stored in the data buffer 618. The stored sum-of-products can be sent to the array of memory cells of the first type of memory cell 621 in the synaptic layer of the first type of synaptic layer 620 of the memory system 600.
The array of memory cells of the first type of memory cell 621 includes three rows and two columns. Each memory cell in the array represents a weight factor Wmn of the cell. The memory cells 731, 732, 741, 742, 751, and 752 of the first type of memory cells store weights w31, w32, w41, w42, w51, and w52 respectively.
A set of second access lines (e.g., 771, 772) is coupled to the memory cells in respective columns of memory cells. A set of first access lines (e.g., 761, 762, and 763) is coupled to the memory cells in respective rows of memory cells. The set of first access lines (e.g., 761, 762, 763) is coupled to the row decoder/drivers 625 and the set of second access lines is coupled to the column decoder 626. The row decoder/drivers 625 receives input signals y1, y2, y3 from the array of memory cells of the first type of memory cell 611 in the synaptic layer of the first type of synaptic layer 610 and asserts the signals on the first access lines in the set of first access lines. As shown in
The sensing circuit 627 is coupled to respective second access lines in the set of second access lines via the column decoder 626. Current (e.g., z1, z2) sensed at a particular second access line (e.g., 771, 772) in the set of second access lines can represent a sum-of-products of the inputs by respective weight factors. The sum-of-products z1, z2 can be stored in the data buffer 628. The stored sum-of-products can be sent to the array of the second type of memory cell 631 in the second type of synaptic layer 630 of the memory system 600 or to the controller 504 in
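The cascade of the two arrays described above, where a 2-row by 3-column array produces y1, y2, y3 that are buffered and then asserted as inputs to a 3-row by 2-column array producing z1, z2, can be sketched numerically. The weight values below are illustrative placeholders, not values from the original text.

```python
# Sketch of the cascaded sum-of-products: the first array's column
# outputs y1..y3 are buffered and driven onto the second array's rows
# to produce z1, z2. Weight values are illustrative placeholders.

def array_outputs(inputs, W):
    # Column output n = sum over rows m of inputs[m] * W[m][n].
    return [sum(inputs[m] * W[m][n] for m in range(len(W)))
            for n in range(len(W[0]))]

x = [1.0, 2.0]                          # inputs x1, x2 on the first array
W1 = [[0.1, 0.2, 0.3],                  # 2-row x 3-column first array
      [0.4, 0.5, 0.6]]
W2 = [[0.5, 0.1],                       # 3-row x 2-column second array
      [0.2, 0.6],
      [0.3, 0.4]]

y = array_outputs(x, W1)                # y1, y2, y3 stored in a buffer
z = array_outputs(y, W2)                # z1, z2 from the second array
```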
The array of memory cells of the first type of memory cell 621 includes three rows and two columns of memory cells of the first type, where a set of third access lines, such as the source lines of charge trapping memories, is coupled to the memory cells of the first type in respective columns of memory cells of the first type. Memory cells of the first type in the array can each comprise a transistor having a voltage threshold, representing a weight factor Wmn of the cell. The memory cells 811, 812, 821, 822, 831, and 832 store weights w31, w32, w41, w42, w51, and w52, respectively.
A set of second access lines (e.g., 851, 852) is coupled to the memory cells of the first type in respective columns of memory cells of the first type. A set of first access lines (e.g., 841, 842, and 843) is coupled to the memory cells of the first type in respective rows of memory cells of the first type. The set of first access lines (e.g., 841, 842, and 843) are coupled to the row decoder/drivers 625. Signals on the first access lines in the set of first access lines can represent inputs y1, y2, and y3 to the respective rows. As shown in
The set of third access lines (e.g., 861, 862) is coupled to the column decoder 626. The sensing circuit 627 is coupled to respective third access lines in the set of third access lines. Current (e.g., z1, z2) sensed at a particular third access line (e.g., 861, 862) in the set of third access lines can represent a sum-of-products of the inputs y1, y2, y3 by respective weight factors Wmn. The sum-of-products z1, z2 can be stored in the data buffer 628. The stored sum-of-products z1, z2 can be sent to the array of memory cells of the second type of memory cell 631 in the synaptic layer of the second type of synaptic layer 630 of the memory system 600.
The array of memory cells of the second type of memory cell 631 includes two rows and three columns of resistive RAMs. Each memory cell of the second type in the array represents a weight factor W of the cell. The memory cells 871, 872, 873, 881, 882, and 883 store weights w71, w72, w73, w81, w82, and w83, respectively.
A set of second access lines (e.g., 863, 864, and 865) is coupled to the memory cells in respective columns of memory cells. A set of first access lines (e.g., 853, 854) is coupled to the memory cells in respective rows of memory cells. The set of first access lines (e.g., 853, 854) is coupled to the row decoder/drivers 635 and the set of second access lines (e.g., 863, 864, and 865) is coupled to the column decoder 636. The row decoder/drivers 635 receives input signals z1, z2 from the array of memory cells of the first type of memory cell 621 in the synaptic layer of the first type of synaptic layer 620 and asserts the signals on the first access lines in the set of first access lines. A signal input z1 is asserted on the first access line 853, and a signal input z2 is asserted on the first access line 854.
Current (e.g., a1, a2, a3) sensed at a particular second access line (e.g., 863, 864, 865) in the set of second access lines can represent a sum-of-products of the inputs by respective weight factors. The sum-of-products a1, a2, a3 can be stored in the data buffer 638.
Digital representations of weights can also be stored in arrays of memory cells of the first type.
Memory cells of the first type in synaptic layers of the first type in the memory system 502 store weights for the synaptic layers of a neural network near the input layer. Memory cells of the second type in the synaptic layers of the second type store weights for the synaptic layers near the output layer of the neural network. The multiplier and accumulator unit 1010 performs the sum-of-products calculation with the input data received from sources external to the in-memory computing device and weights stored in the memory cells of the first type, and the memory cells of the second type in the memory system 502. The multiplier and accumulator unit 1010 may be a general purpose processor or special purpose application circuitry, or a combination of modules providing system-on-a-chip functionality.
The controller 504 can further include circuitry for supplying addresses for memory cells storing weights for the Nth synaptic layers to row and column decoders in the memory system 502 and inputs for the Nth synaptic layer to the multiplier and accumulator unit 1010. The multiplier and accumulator unit 1010 receives weights stored for the Nth synaptic layer from the memory system 502 to compute sum-of-products. The multiplier and accumulator unit provides the sum-of-products to the controller 504 as the output for the Nth synaptic layer. The output for the Nth synaptic layer can be used as the inputs for the N+1th synaptic layer, or the output can be the final output of the neural network.
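The layer-by-layer flow described above, in which the multiplier and accumulator unit computes the sum-of-products for the Nth synaptic layer and its output becomes the input of the N+1th layer, can be sketched as follows. Layer shapes and weight values are illustrative assumptions.

```python
# Sketch of inference using an external multiplier-and-accumulator (MAC)
# unit: for each synaptic layer, the weights are read out of the memory
# system and the MAC computes the sum-of-products; the output of layer N
# is the input of layer N+1. Shapes and values are illustrative.

def mac_sum_of_products(inputs, weights):
    # weights[m][n]: weight linking input m to output n of this layer.
    return [sum(inputs[m] * weights[m][n] for m in range(len(weights)))
            for n in range(len(weights[0]))]

def run_network(inputs, layers):
    # layers holds the weight matrices read from the synaptic layers,
    # ordered from the input layer toward the output layer.
    activations = inputs
    for W in layers:
        activations = mac_sum_of_products(activations, W)
    return activations

layers = [
    [[0.5, -0.2], [0.1, 0.3]],   # first synaptic layer (2 -> 2)
    [[1.0], [-1.0]],             # second synaptic layer (2 -> 1)
]
output = run_network([1.0, 2.0], layers)   # final output of the network
```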
The first type of memory cell is less prone to weight fluctuations than the second type of memory cell. In some embodiments, the fabrication process of the first type of memory cell may be different than the fabrication process of the second type of memory cell, the first type of memory cell having less device variability when compared to the second type of memory cell.
In some embodiments of the in-memory computing device, the first type of memory cell and the second type of memory cell may include the same type of memory cell (e.g., resistive RAMs), and the size of the memory cells of the first type may be bigger than the memory cells of the second type. The larger memory cells of the first type will be less noisy than the memory cells of the second type, resulting in less weight fluctuations in the memory cells of the first type.
In some embodiments, memory cells of the first type, and memory cells of the second type, may include different memory cells, i.e., the structure of memory cells of the first type being different than memory cells of the second type. Memory cells of the first type may include volatile memory cells (e.g., SRAM and DRAM), and memory cells of the second type may include non-volatile memory cells (e.g., resistive RAM). In some embodiments, memory cells of the first type may include read-only memory cells (e.g., fuse ROM), and memory cells of the second type may include reprogrammable memory cells (e.g., resistive RAM).
Memory cells of the first type may be volatile memory cells (e.g., SRAM and DRAM) or non-volatile memory cells (e.g., mask ROM, fuse ROM, and resistive RAM). Memory cells of the first type may be read-only memory cells (e.g., mask ROM, fuse ROM) or reprogrammable memory cells (e.g., SRAM, DRAM, and resistive RAM). In some embodiments, the weights stored in memory cells of the first type may be the resistance of the memory cells, for example, memory cells such as resistive RAM, floating gate MOSFETs, dielectric charge trapping devices (e.g., SONOS, BE-SONOS, TANOS, MA BE-SONOS) and phase change memories. In some embodiments, the weights stored may be the two or more bits of information stored in the memory cells, for example, bits “0” and “1” in SRAMs, DRAMs, and ROMs.
Memory cells of the second type may be non-volatile memory cells, such as resistive RAM, floating gate MOSFETs, dielectric charge trapping devices (e.g., SONOS, BE-SONOS, TANOS, MA BE-SONOS), phase change memories, ferroelectric RAMs, and magnetic RAMs. In some embodiments, the weights stored in memory cells of the second type may be the resistance of the memory cells, for example, memory cells such as resistive RAM, floating gate MOSFETs, dielectric charge trapping devices (e.g., SONOS, BE-SONOS, TANOS, MA BE-SONOS) and phase change memories.
At step 1330, peripheral circuitry supporting the in-memory computing device is formed. The peripheral circuitry may include row decoder/drivers (e.g., row decoders/drivers 615, 625, 635, 645 in
While the present invention is disclosed by reference to the preferred embodiments and examples detailed above, it is to be understood that these examples are intended in an illustrative rather than in a limiting sense. It is contemplated that modifications and combinations will readily occur to those skilled in the art, which modifications and combinations will be within the spirit of the invention and the scope of the following claims.
This application claims the benefit of U.S. Provisional Patent Application No. 62/698,982 filed 17 Jul. 2018; which application is incorporated herein by reference.
20200334015 | Shibata et al. | Oct 2020 | A1 |
Number | Date | Country |
---|---|---
1998012 | Nov 2010 | CN |
105718994 | Jun 2016 | CN |
105789139 | Jul 2016 | CN |
106530210 | Mar 2017 | CN |
2048709 | Apr 2009 | EP |
201523838 | Jun 2015 | TW |
201618284 | May 2016 | TW |
201639206 | Nov 2016 | TW |
201732824 | Sep 2017 | TW |
201741943 | Dec 2017 | TW |
201802800 | Jan 2018 | TW |
201822203 | Jun 2018 | TW |
2012009179 | Jan 2012 | WO |
2012015450 | Feb 2012 | WO |
2016060617 | Apr 2016 | WO |
2017091338 | Jun 2017 | WO |
2018201060 | Nov 2018 | WO |
Entry |
---|
EP Extended Search Report from 18155279.5-1203 dated Aug. 30, 2018, 8 pages. |
EP Extended Search Report from EP18158099.4 (corresponding to MXIC 2230) dated Sep. 19, 2018, 8 pages. |
Jang et al., “Vertical cell array using TCAT(Terabit Cell Array Transistor) technology for ultra high density NAND flash memory,” 2009 Symposium on VLSI Technology, Honolulu, HI, Jun. 16-18, 2009, pp. 192-193. |
Kim et al. “Multi-Layered Vertical Gate NAND Flash Overcoming Stacking Limit for Terabit Density Storage,” 2009 Symposium on VLSI Technology Digest of Papers, Jun. 16-18, 2009, 2 pages. |
Kim et al., “Novel Vertical-Stacked-Array-Transistor (VSAT) for Ultra-High-Density and Cost-Effective NAND Flash Memory Devices and SSD (Solid State Drive)”, Jun. 2009 Symposium on VLSI Technology Digest of Technical Papers, pp. 186-187. (cited in parent—copy not provided herewith). |
Lue et al., “A Novel 3D AND-type NVM Architecture Capable of High-density, Low-power In-Memory Sum-of-Product Computation for Artificial Intelligence Application,” IEEE VLSI, Jun. 18-22, 2018, 2 pages. |
Ohzone et al., “Ion-Implanted Thin Polycrystalline-Silicon High-Value Resistors for High-Density Poly-Load Static RAM Applications,” IEEE Trans. on Electron Devices, vol. ED-32, No. 9, Sep. 1985, 8 pages. |
Sakai et al., “A Buried Giga-Ohm Resistor (BGR) Load Static RAM Cell,” IEEE Symp. on VLSI Technology, Digest of Papers, Sep. 10-12, 1984, 2 pages. |
Schuller et al., “Neuromorphic Computing: From Materials to Systems Architecture,” US Dept. of Energy, Oct. 29-30, 2015, Gaithersburg, MD, 40 pages. |
Seo et al., “A Novel 3-D Vertical FG NAND Flash Memory Cell Arrays Using the Separated Sidewall Control Gate (S-SCG) for Highly Reliable MLC Operation,” 2011 3rd IEEE International Memory Workshop (IMW), May 22-25, 2011, 4 pages. |
Soudry, et al. “Hebbian learning rules with memristors,” Center for Communication and Information Technologies CCIT Report #840, Sep. 1, 2013, 16 pages. |
Tanaka H., et al., “Bit Cost Scalable Technology with Punch and Plug Process for Ultra High Density Flash Memory,” 2007 Symp. VLSI Tech., Digest of Tech. Papers, pp. 14-15. |
U.S. Office Action in U.S. Appl. No. 15/887,166 dated Jul. 10, 2019, 18 pages. |
U.S. Office Action in U.S. Appl. No. 15/887,166 dated Jan. 30, 2019, 18 pages. |
U.S. Office Action in U.S. Appl. No. 15/922,359 dated Jun. 24, 2019, 8 pages. |
U.S. Office Action in related case U.S. Appl. No. 15/873,369 dated May 9, 2019, 8 pages. |
Whang, SungJin et al. “Novel 3-dimensional Dual Control-gate with Surrounding Floating-gate (DC-SF) NAND flash cell for 1Tb file storage application,” 2010 IEEE Int'l Electron Devices Meeting (IEDM), Dec. 6-8, 2010, 4 pages. |
Anonymous, “Data in the Computer”, May 11, 2015, pp. 1-8, https://web.archive.org/web/20150511143158/https://homepage.cs.uri.edu/faculty/wolfe/book/Readings/Reading02.htm (Year: 2015)—See Office Action dated Aug. 17, 2020 in U.S. Appl. No. 16/279,494 for relevance—no year provided by examiner. |
Rod Nussbaumer, “How is data transmitted through wires in the computer?”, Aug. 27, 2015, pp. 1-3, https://www.quora.com/How-is-data-transmitted-through-wires-in-the-computer (Year: 2015)—See Office Action dated Aug. 17, 2020 in U.S. Appl. No. 16/279,494 for relevance—no year provided by examiner. |
Scott Thornton, “What is DRAM (Dynamic Random Access Memory) vs SRAM?”, Jun. 22, 2017, pp. 1-11, https://www.microcontrollertips.com/dram-vs-sram/ (Year: 2017)—See Office Action dated Aug. 17, 2020 in U.S. Appl. No. 16/279,494 for relevance—no year provided by examiner. |
TW Office Action from TW Application No. 10920683760, dated Jul. 20, 2020, 4 pages. |
U.S. Office Action in U.S. Appl. No. 16/233,404 dated Jul. 30, 2020, 20 pages. |
U.S. Office Action in U.S. Appl. No. 16/279,494 dated Aug. 17, 2020, 25 pages. |
Webopedia, “DRAM—dynamic random access memory”, Jan. 21, 2017, pp. 1-3, https://web.archive.org/web/20170121124008/https://www.webopedia.com/TERM/D/DRAM.html (Year: 2017)—See Office Action dated Aug. 17, 2020 in U.S. Appl. No. 16/279,494 for relevance—no year provided by examiner. |
Webopedia, “volatile memory”, Oct. 9, 2017, pp. 1-4, https://web.archive.org/web/20171009201852/https://www.webopedia.com/TERM/V/volatile_memory.html (Year: 2017)—See Office Action dated Aug. 17, 2020 in U.S. Appl. No. 16/279,494 for relevance—no year provided by examiner. |
Chen et al., “Eyeriss: An Energy-Efficient reconfigurable accelerator for deep convolutional neural networks,” IEEE ISSCC, Jan. 31-Feb. 4, 2016, 3 pages. |
EP Extended Search Report from EP19193290.4 dated Feb. 14, 2020, 10 pages. |
Gonugondla et al., “Energy-Efficient Deep In-memory Architecture for NAND Flash Memories,” IEEE International Symposium on Circuits and Systems (ISCAS), May 27-30, 2018, 5 pages. |
Jung et al., “Three Dimensionally Stacked NAND Flash Memory Technology Using Stacking Single Crystal Si Layers on ILD and TANOS Structure for Beyond 30nm Node,” International Electron Devices Meeting, 2006, IEDM '06, Dec. 11-13, 2006, pp. 1-4. |
Lai et al., “A Multi-Layer Stackable Thin-Film Transistor (TFT) NAND-Type Flash Memory,” Electron Devices Meeting, 2006, IEDM '06 International, Dec. 11-13, 2006, pp. 1-4. |
TW Office Action from TW Application No. 10820980430, dated Oct. 16, 2019, 6 pages (with English Translation). |
U.S. Office Action in U.S. Appl. No. 15/873,369 dated Dec. 4, 2019, 5 pages. |
U.S. Office Action in U.S. Appl. No. 15/922,359 dated Oct. 11, 2019, 7 pages. |
U.S. Office Action in U.S. Appl. No. 16/233,414 dated Oct. 31, 2019, 22 pages. |
U.S. Office Action in related case U.S. Appl. No. 16/037,281 dated Dec. 19, 2019, 9 pages. |
U.S. Office Action in related case U.S. Appl. No. 16/297,504 dated Feb. 4, 2020, 15 pages. |
Wang et al., “Three-Dimensional NAND Flash for Vector-Matrix Multiplication,” IEEE Trans. on Very Large Scale Integration Systems (VLSI), vol. 27, No. 4, Apr. 2019, 4 pages. |
U.S. Office Action in U.S. Appl. No. 16/233,414 dated Apr. 20, 2020, 17 pages. |
Aritome, et al., “Reliability issues of flash memory cells,” Proc. of the IEEE, vol. 81, No. 5, May 1993, pp. 776-788. |
Chen et al., “A Highly Pitch Scalable 3D Vertical Gate (VG) NAND Flash Decoded by a Novel Self-Aligned Independently Controlled Double Gate (IDG) StringSelect Transistor (SSL),” 2012 Symp. on VLSI Technology (VLSIT), Jun. 12-14, 2012, pp. 91-92. |
Choi et al., “Performance Breakthrough in NOR Flash Memory With Dopant-Segregated Schottky-Barrier (DSSB) SONOS Devices,” Jun. 2009 Symposium on VLSITechnology Digest of Technical Papers, pp. 222-223. |
Fukuzumi et al. “Optimal Integration and Characteristics of Vertical Array Devices for Ultra-High Density, Bit-Cost Scalable Flash Memory,” IEEE Dec. 2007, pp. 449-452. |
Guo et al., “Fast, energy-efficient, robust, and reproducible mixed-signal neuromorphic classifier based on embedded NOR flash memory technology,” IEEE Int'l Electron Devices Mtg., San Francisco, CA, Dec. 2-6, 2017, 4 pages. |
Hsu et al., “Study of Sub-30nm Thin Film Transistor (TFT) Charge-Trapping (CT) Devices for 3D NAND Flash Application,” 2009 IEEE, Dec. 7-9, 2009, pp. 27.4.1-27.4.4. |
Hubert et al., “A Stacked Sonos Technology, Up to 4 Levels and 6nm Crystalline Nanowires, With Gate-All-Around or Independent Gates (Flash), Suitable for Full 3D Integration,” IEEE 2009, Dec. 7-9, 2009, pp. 27.6.1-27.6.4. |
Hung et al., “A highly scalable vertical gate (VG) 3D NAND Flash with robust program disturb immunity using a novel PN diode decoding structure,” 2011 Symp. on VLSI Technology (VLSIT), Jun. 14-16, 2011, pp. 68-69. |
Katsumata et al., “Pipe-shaped BiCS Flash Memory With 16 Stacked Layers and Multi-Level-Cell Operation for Ultra High Density Storage Devices,” 2009 Symposium on VLSI Technology Digest of Technical Papers, Jun. 16-18, 2009, pp. 136-137. |
Kim et al., “Novel 3-D Structure for Ultra High Density Flash Memory with VRAT (Vertical-Recess-Array-Transistor) and PIPE (Planarized Integration on the same PlanE),” IEEE 2008 Symposium on VLSI Technology Digest of Technical Papers, Jun. 17-19, 2008, pp. 122-123. |
Kim et al., “Three-Dimensional NAND Flash Architecture Design Based on Single-Crystalline STacked ARray,” IEEE Transactions on Electron Devices, vol. 59, No. 1, pp. 35-45, Jan. 2012. |
Lue et al., “A Highly Scalable 8-Layer 3D Vertical-Gate (VG) TFT NAND Flash Using Junction-Free Buried Channel BE-SONOS Device”, 2010 Symposium on VLSI Technology Digest of Technical Papers, pp. 131-132, Jun. 15-17, 2010. |
Merrikh-Bayat et al., “High-Performance Mixed-Signal Neurocomputing with Nanoscale Floating-Gate Memory Cell Arrays,” in IEEE Transactions on Neural Networks and Learning Systems, vol. 29, No. 10, Oct. 2018, pp. 4782-4790. |
Nowak et al., “Intrinsic fluctuations in Vertical NAND flash memories,” 2012 Symposium on VLSI Technology, Digest of Technical Papers, pp. 21-22, Jun. 12-14, 2012. |
U.S. Office Action in U.S. Appl. No. 16/279,494 dated Nov. 12, 2020, 25 pages. |
U.S. Office Action in U.S. Appl. No. 16/359,919 dated Oct. 16, 2020, 13 pages. |
Wang, Michael, “Technology Trends on 3D-NAND Flash Storage”, Impact 2011, Taipei, dated Oct. 20, 2011, found at www.impact.org.tw/2011/Files/NewsFile/201111110190.pdf. |
Webopedia, “SoC”, Oct. 5, 2011, pp. 1-2, https://web.archive.org/web/20111005173630/https://www.webopedia.com/TERM/S/SoC.html (Year: 2011)—See Office Action dated Aug. 17, 2020 in U.S. Appl. No. 16/279,494 for relevance—no month provided by examiner. |
Y.X. Liu et al., “Comparative Study of Tri-Gate and Double-Gate-Type Poly-Si Fin-Channel Split-Gate Flash Memories,” 2012 IEEE Silicon Nanoelectronics Workshop (SNW), Honolulu, HI, Jun. 10-11, 2012, pp. 1-2. |
U.S. Office Action in U.S. Appl. No. 16/359,919 dated Mar. 3, 2021, 15 pages. |
Number | Date | Country | |
---|---|---|---|
20200026991 A1 | Jan 2020 | US |
Number | Date | Country | |
---|---|---|---|
62698982 | Jul 2018 | US |