The present disclosure relates to memory devices and, more particularly, to memory devices including volatile and non-volatile memory cells.
A neural network is an information processing paradigm that is inspired by the way biological nervous systems process information. With the availability of large training datasets and sophisticated learning algorithms, neural networks have facilitated major advances in numerous domains such as computer vision, speech recognition, and natural language processing.
The basic unit of computation in a neural network is a neuron. A neuron receives inputs from other neurons or from an external source and computes an output.
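A representative sum-of-products expression (the exact expression from the accompanying figures is not reproduced here; this form is illustrative) is

y = \sum_{i=1}^{n} x_i \, w_i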
In the sum-of-products expression above, each product term is a product of a variable input x_i and a weight w_i. The weight w_i can vary among the terms, corresponding, for example, to coefficients of the variable inputs x_i. Similarly, outputs from the other neurons in the hidden layer can also be calculated. The outputs of the two neurons in the hidden layer 110 act as inputs to the output neuron in the output layer 104.
Neural networks can be used to learn patterns that best represent a large set of data. The hidden layers closer to the input layer learn high-level, generic patterns, and the hidden layers closer to the output layer learn more data-specific patterns. Training is a phase in which a neural network learns from training data. During training, the connections in the synaptic layers are assigned weights based on the results of the training session. Inference is a stage in which a trained neural network processes input data and produces output data based on its predictions.
A convolutional neural network is a type of neural network that comprises one or more convolutional hidden layers after the input layer, followed by one or more fully connected hidden layers. A convolutional neural network is most commonly applied to analyze 2D data, such as object recognition within images. In a convolutional hidden layer, a dot product between an area of an input image and a weight matrix can be calculated by sliding the weight matrix across the whole image and repeating the same dot product operation. The convolutional hidden layers are used to detect high-level features of the input image. The output of the last convolutional hidden layer is the input of the first fully connected hidden layer. Every neuron in a fully connected hidden layer is connected to every neuron in the adjacent fully connected hidden layers. The purpose of the fully connected hidden layers is to use a non-linear combination of the features detected in the convolutional hidden layers to classify the objects in the input image.
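The sliding-window dot product described above can be sketched in a few lines. The following is an illustrative model only (single-channel input, unit stride, no padding), not code from the disclosure:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` and compute a dot product at each position."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    oh, ow = ih - kh + 1, iw - kw + 1          # "valid" (no-padding) output size
    out = np.zeros((oh, ow))
    for r in range(oh):
        for c in range(ow):
            # dot product between the weight matrix and the image patch it covers
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

# Example: a 2x2 weight matrix slid over a 4x4 single-channel image
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1, -1],
                   [1, -1]], dtype=float)
print(conv2d(image, kernel))
```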
In-memory computing is an approach in which memory cells, organized in an in-memory computing device, can be used for both data processing and memory storage. A neural network or a convolutional neural network can be implemented in an in-memory computing device. The weights for the sum-of-products function can be stored in memory cells of the in-memory computing device. The sum-of-products function can be realized as a circuit operation in the in-memory computing device in which the electrical characteristics of the memory cells of the array effectuate the function.
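As a behavioral illustration (not a circuit description from the disclosure), one common way the electrical characteristics effectuate the function is to store weights as cell conductances and apply inputs as voltages, so the current summed on a shared line approximates the sum of products:

```python
def column_current(voltages, conductances):
    """I = sum(G_i * V_i): per-cell currents (Ohm's law) summed on a shared line."""
    return sum(v * g for v, g in zip(voltages, conductances))

# Inputs applied as voltages against weights stored as conductances (illustrative values)
inputs = [0.2, 0.0, 0.5]          # volts
weights = [1e-6, 2e-6, 4e-6]      # siemens
print(column_current(inputs, weights))   # sensed current ~ weighted sum of inputs
```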
In in-memory computing devices with volatile memory cells (e.g., SRAM), the time taken for performing sum-of-products operations may be short, and the operations may have high inference accuracies. However, it may take a long time to load weights in the volatile memory cells from other memory cells storing the weights needed for the sum-of-products operations. Also, performing sum-of-products operations with volatile memory cells may result in large power consumption.
In in-memory computing devices with non-volatile memory cells, device variability in the memory cells, inaccurate read and write operations, and other non-ideal device characteristics can lead to fluctuations in the weights stored in the non-volatile memory cells. The fluctuations in the stored weights, especially in in-memory computing devices using reprogrammable non-volatile memory devices such as floating gate memories, phase change memories, resistive RAMs, etc., can lead to less accurate output data by the neural networks implemented in the in-memory computing devices.
It is desirable to provide an in-memory computing device with higher inference accuracy that can perform fast, low-power sum-of-products operations.
An integrated circuit is described herein that comprises an in-memory computing device implementing a neural network. In some embodiments, the in-memory computing device can implement a convolutional neural network. The in-memory computing device has an array of composite memory units. Each composite memory unit comprises a first memory cell of a first type, a second memory cell of a second type, a first intra-unit data path connecting the first memory cell to the second memory cell, and a first data path control switch. The first intra-unit data path connects a current carrying terminal of the first memory cell to a current carrying terminal of the second memory cell. The first data path control switch is responsive to a data transfer enable signal which enables data transfer between the first memory cell and the second memory cell through the first intra-unit data path.
The first type of memory cells may be volatile memory cells (e.g., SRAM), whereas the second type of memory cells may be non-volatile memory cells (e.g., floating gate memories, phase change memories, resistive RAMs, magnetoresistive RAMs, ferroelectric RAMs, etc.). The first memory cells in the array of composite memory units are configured for fast and more accurate sum-of-products operations. The second memory cells in the array of composite memory units are configured to store weights for the synaptic layers of neural networks. The second memory cells in the array of composite memory units may also be configured to store the results of sum-of-products operations.
First memory cells and second memory cells in rows of composite memory units in the array are coupled to a set of first word lines and a set of second word lines, respectively. First memory cells and second memory cells in columns of composite memory units in the array are coupled to a set of first bit lines and a set of second bit lines, respectively. Second memory cells in columns of composite memory units are coupled to a set of first source lines. The array of composite memory units may further comprise signal control circuitry electrically coupled to the set of first word lines, the set of second word lines, the set of first bit lines, the set of second bit lines and the set of first source lines. The signal control circuitry may also assert data transfer enable signals to first data path control switches in the array of composite memory units.
In some embodiments of an in-memory computing device, each composite memory unit may further comprise a third memory cell of the second type. A second intra-unit data path may connect the first memory cell to the third memory cell. A second data path control switch responsive to a data transfer enable signal enables data transfer between the first memory cell and the third memory cell through the second intra-unit data path.
Also described are methods of transferring data between the memory cells in a composite memory unit, methods of performing sum-of-products operations using composite memory units, and control circuits arranged to carry out the methods.
Other aspects and advantages of the present disclosure can be seen on review of the drawings, the detailed description, and the claims, which follow.
A detailed description of embodiments of the present disclosure is provided with reference to the figures.
The first type of memory cells may be volatile memory cells (e.g., SRAM). The weight stored in the first memory cell 202 may be the information stored in the memory cell itself, for example, bits "0" and "1" stored in an SRAM. The second type of memory cells may be non-volatile memory cells (e.g., floating gate memories, phase change memories, resistive RAMs, magnetoresistive RAMs, ferroelectric RAMs, etc.). In some embodiments, the second type of memory cells may be accompanied by a transistor (e.g., 1T-1R resistive RAMs). Memory cells of the second type may be reprogrammable memory cells so that weights stored in the second type of memory cell can be changed while training the neural network or fine-tuning the neural network for higher inference accuracy. In some embodiments, the weights stored in memory cells of the second type may be sensed based on the resistances of the memory cells, for example, in memory cells such as resistive RAMs, floating gate MOSFETs, dielectric charge trapping devices (e.g., SONOS, BE-SONOS, TANOS, MA BE-SONOS), and phase change memories.
The first memory cell 202 can be used to store a weight WF and perform a sum-of-products operation with the stored weight given an input x. The output of the sum-of-products operation is x*WF. The second memory cell 204 can be used to store a weight WS and perform a sum-of-products operation with the stored weight given an input y. The output of the sum-of-products operation is y*WS. The second memory cell 204 can also be used to store the weight WF for the first memory cell 202. Before a sum-of-products operation by the first memory cell, the weight stored in the second memory cell can be loaded into the first memory cell through the first intra-unit data path 208. The first memory cell 202 can store the result of the sum-of-products operation in the second memory cell 204 through the first intra-unit data path 208.
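The load-compute-writeback sequence just described can be sketched behaviorally as follows; the class and method names are hypothetical and only illustrate the data flow between the two cells of a composite memory unit:

```python
class CompositeMemoryUnit:
    """Behavioral sketch: one volatile (first) cell plus one non-volatile (second) cell."""

    def __init__(self, stored_weight):
        self.nonvolatile = stored_weight   # weight retained in the second memory cell
        self.volatile = None               # fast first memory cell used for the operation

    def load_weight(self):
        # data transfer enable asserted: copy the weight over the intra-unit data path
        self.volatile = self.nonvolatile

    def multiply(self, x):
        # this unit's contribution to the sum-of-products: x * WF
        return x * self.volatile

    def store_result(self, value):
        # write a result back into the non-volatile cell through the same path
        self.nonvolatile = value

unit = CompositeMemoryUnit(stored_weight=0.75)
unit.load_weight()
product = unit.multiply(0.4)       # 0.3
unit.store_result(product)
```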
In addition to performing sum-of-products operations, the second memory cell 304 can be used to store a weight for the first memory cell 302. Before a sum-of-products operation by the first memory cell, the weight stored in the second memory cell can be loaded into the first memory cell through the first intra-unit data path 312. The first memory cell 302 can store the result of a sum-of-products operation in the third memory cell 306 through the second intra-unit data path 314.
The second memory cell 404 includes a transistor and a resistive RAM. The second memory cell 404 is electrically coupled to a second word line 428, a second bit line 430 and a first source line 434. A weight factor WS may be stored in the resistive RAM of the second memory cell 404.
An intra-unit data path 418 connects a current carrying terminal of one of the inverters (i.e., the terminal storing the weight factor WF) in the first memory cell 402 to one of the current carrying terminals of the resistive RAM of the second memory cell 404. The other current carrying terminal of the resistive RAM is connected to the transistor of the second memory cell 404. An N-channel transistor acting as the first data path control switch 406 controls the current flow or data transfer between the pair of cross-coupled inverters 407 of the first memory cell 402 and the resistive RAM of the second memory cell 404.
A first signal control circuitry, such as a row decoder and driver circuitry 440, is electrically coupled to the first memory cell 402 and the second memory cell 404 through the first word line 410 and the second word line 428, respectively. The row decoder and driver circuitry 440 is also coupled to the gate terminal of the N-channel transistor acting as the first data path control switch 406 through a conducting path 422. The row decoder and driver circuitry 440 may assert a first data transfer enable signal through the conducting path 422 to the first data path control switch 406 to allow current flow or data transfer between the pair of cross-coupled inverters 407 of the first memory cell 402 and the resistive RAM of the second memory cell 404. A second signal control circuitry, such as a column decoder and driver circuitry 442, is coupled to the first memory cell 402 through the first bit line 412 and the first bit complement line 414. The column decoder and driver circuitry 442 is also electrically coupled to the second memory cell 404 through the second bit line 430 and the first source line 434. In some embodiments, the column decoder and driver circuitry 442 may include sense amplifiers.
The second memory cell 904 and the third memory cell 906 each include a transistor and a resistive RAM. The second memory cell 904 and the third memory cell 906 are electrically coupled to a second word line 928. The second memory cell 904 is coupled to a second bit line 930 and a first source line 934. The third memory cell 906 is coupled to a third bit line 926 and a second source line 936.
A first intra-unit data path 918 connects a current carrying terminal of one of the inverters (i.e., the terminal storing the weight factor WF) in the first memory cell 902 to one of the current carrying terminals of the resistive RAM of the second memory cell 904. An N-channel transistor acting as the first data path control switch 950 controls the current flow or data transfer between the pair of cross-coupled inverters 907 of the first memory cell 902 and the resistive RAM of the second memory cell 904. A second intra-unit data path 916 connects the same current carrying terminal of one of the inverters in the first memory cell 902 to one of the current carrying terminals of the resistive RAM of the third memory cell 906. Another N-channel transistor acting as the second data path control switch 952 controls the current flow or data transfer between the pair of cross-coupled inverters 907 of the first memory cell 902 and the resistive RAM of the third memory cell 906.
A first signal control circuitry, such as the row decoder and driver circuitry 940, is electrically coupled to the first memory cell 902 through the first word line 910, and the second memory cell 904 and the third memory cell 906 through the second word line 928. The row decoder and driver circuitry 940 is also coupled to the gate terminals of the first data path control switch 950 and the second data path control switch 952 through the conducting paths 920 and 922, respectively.
A column decoder and driver circuitry 944 is coupled to the first memory cell 902 through the first bit line 912 and the first bit complement line 914. The column decoder and driver circuitry 944 is electrically coupled to the second memory cell 904 through the second bit line 930 and the first source line 934. The column decoder and driver circuitry 944 is electrically coupled to the third memory cell 906 through the third bit line 926 and the second source line 936.
Rows of composite memory units share common first word lines (e.g., common first word lines 1110 and 1112) coupling the first memory cells in the rows to the row decoder and driver circuitry 1125. Rows of composite memory units also share common second word lines (e.g., common second word lines 1114 and 1116) coupling the second memory cells in the rows to the row decoder and driver circuitry 1125. The row decoder and driver circuitry 1125 is also configured to assert data transfer enable signals to data path control switches in rows of composite memory units through common conducting paths (e.g., common conducting paths 1130 and 1132). In some embodiments, data transfer between the first memory cells and the second memory cells in a row of composite memory units can be enabled by asserting a common data transfer enable signal to all the data path control switches in the row. Data can be transferred from the first memory cells to the second memory cells in the row or from the second memory cells to the first memory cells.
Columns of composite memory units share common first bit lines (e.g., common first bit lines 1118 and 1120), common second bit lines (e.g., common second bit lines 1122 and 1124) and common first source lines (e.g., common first source lines 1126 and 1128). Columns of composite memory units also share common first bit complement lines (e.g., common first bit complement lines 1140 and 1142). The common first bit lines, the common first bit complement lines, the common second bit lines, and the common first source lines couple the first and second memory cells to the column decoders and drivers 1152 and 1162. First memory cells in the composite memory units 1102 and 1106 are coupled to the column decoders and drivers 1152 through the common first bit complement line 1140, and the first memory cells in the composite memory units 1104 and 1108 are coupled to the column decoders and drivers 1162 through the common first bit complement line 1142.
In some embodiments, signals on the first word lines represent inputs x_i to the first memory cells in respective rows of composite memory units. The output current sensed at a particular first bit line by the column decoders and drivers 1152 and 1162 can represent a sum of products of the inputs x_i with the respective weight factors WF in the column of first memory cells coupled to that first bit line. In some embodiments, a signal on the common second bit line represents an input x to the second memory cells in a column of composite memory units. The output current sensed by the column decoders and drivers 1152 and 1162 at the first source line coupled to those second memory cells can represent a sum of products of the input x with the respective weight factors WS in the column of second memory cells coupled to the common second bit line.
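At the array level, the operation described above amounts to a matrix-vector product: each first bit line sums, over the rows, the products of the word-line inputs x_i and the weight factors WF stored in that column of first memory cells. A compact behavioral sketch with illustrative values only (not a circuit model):

```python
import numpy as np

# Rows = first word lines (one input per row); columns = first bit lines (one output per column)
weights_WF = np.array([[0.5, 1.0],
                       [0.2, 0.4]])       # weights held in the first (volatile) cells
inputs_x = np.array([1.0, 0.5])           # signals asserted on the first word lines

# Each bit-line "current" is the sum over rows of input * weight in that column
bitline_outputs = inputs_x @ weights_WF
print(bitline_outputs)                    # [0.6, 1.2]
```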
Rows of composite memory units share common first word lines (e.g., common first word lines 1210 and 1212) coupling the first memory cells in the rows to the row decoder and driver circuitry 1225. Rows of composite memory units also share common second word lines (e.g., common second word lines 1214 and 1216) coupling the second memory cells and third memory cells in the rows to the row decoder and driver circuitry 1225. The row decoder and driver circuitry 1225 is also configured to assert data transfer enable signals to first data path control switches and second data path control switches in rows of composite memory units through common conducting paths (e.g., common conducting paths 1230, 1231, 1232 and 1233).
Columns of composite memory units share common first bit lines (e.g., common first bit lines 1218 and 1220), common first bit complement lines (e.g., common first bit complement lines 1250 and 1252), common second bit lines (e.g., common second bit lines 1222 and 1224), common third bit lines (e.g., common third bit lines 1254 and 1256), common first source lines (e.g., common first source lines 1226 and 1228) and common second source lines (e.g., common second source lines 1258 and 1260). The common first bit lines and the common first bit complement lines couple the first memory cells to the column decoders/drivers 1272, 1282. The common second bit lines and the common first source lines couple the second memory cells to the column decoders/drivers 1272, 1282. The common third bit lines and the common second source lines couple the third memory cells to the column decoders/drivers 1272, 1282.
Input/output circuits 1393 receive input data from sources external to the in-memory computing device 1300. The input/output circuits 1393 also drive output data to destinations external to the in-memory computing device 1300. Input/output data and control signals are moved via data bus 1305 between the input/output circuits 1393, the controller 1304 and input/output ports on the in-memory computing device 1300 or other data sources internal or external to the in-memory computing device 1300, such as a general purpose processor or special purpose application circuitry, or a combination of modules providing system-on-a-chip functionality supported by the array of composite memory units 1302. Buffer circuits 1390 can be coupled to the input/output circuits 1393 and the controller 1304 to store input/output data and control signals.
The controller 1304 can include circuits for selectively applying program voltages, such as row select voltages, activating voltages and data transfer enable signals, to the first and second memory cells in the array of composite memory units 1302 in response to the input data and control signals in the buffer circuits 1390.
The controller 1304 can be implemented using special-purpose logic circuitry as known in the art. In alternative embodiments, the controller 1304 comprises a general-purpose processor, which can be implemented on the same integrated circuit and which executes a computer program to control the operations of the device. In yet other embodiments, a combination of special-purpose logic circuitry and a general-purpose processor can be utilized for implementation of the controller 1304. A bias arrangement state machine 1312 controls the biasing arrangement supply voltages as described herein.
A number of flowcharts illustrating logic executed by a memory controller or by in-memory computing devices are described herein. The logic can be implemented using processors programmed using computer programs stored in memory accessible to the processors, by dedicated logic hardware, including field programmable integrated circuits, and by combinations of dedicated logic hardware and computer programs. With all flowcharts herein, it will be appreciated that many of the steps can be combined, performed in parallel, or performed in a different sequence without affecting the functions achieved. In some cases, as the reader will appreciate, a rearrangement of steps will achieve the same results only if certain other changes are made as well. In other cases, as the reader will appreciate, a rearrangement of steps will achieve the same results only if certain conditions are satisfied. Furthermore, it will be appreciated that the flowcharts herein show only steps that are pertinent to an understanding of the disclosure, and it will be understood that numerous additional steps for accomplishing other functions can be performed before, after and between those shown.
While the present disclosure is disclosed by reference to the preferred embodiments and examples detailed above, it is to be understood that these examples are intended in an illustrative rather than in a limiting sense. It is contemplated that modifications and combinations will readily occur to those skilled in the art, which modifications and combinations will be within the spirit of the disclosure and the scope of the following claims.