Non-volatile computing method in flash memory

Information

  • Patent Grant
  • Patent Number
    11,132,176
  • Date Filed
    Wednesday, March 20, 2019
  • Date Issued
    Tuesday, September 28, 2021
Abstract
An in-memory multiply and accumulate circuit includes a memory array, such as a NOR flash array, storing weight values Wi,n. A row decoder is coupled to the set of word lines, and configured to apply word line voltages to selected word lines in the set. Bit line bias circuits produce bit line bias voltages for the respective bit lines as a function of input values Xi,n on the corresponding inputs. Current sensing circuits are connected to receive currents in parallel from a corresponding multi-member subset of bit lines in the set of bit lines, and to produce an output in response to a sum of currents.
Description
BACKGROUND
Field

The present invention relates to circuitry that can be used to perform in-memory computation, such as multiply and accumulate or other sum-of-products like operations.


Description of Related Art

In neuromorphic computing systems, machine learning systems and circuitry used for some types of computations based on linear algebra, the multiply and accumulate or sum-of-products functions can be an important component. Such functions can be expressed as follows:







f(Xi) = Σ (i = 1 to M) Wi · Xi

In this expression, each product term is a product of a variable input Xi and a weight Wi. The weight Wi can vary among the terms, corresponding for example to coefficients of the variable inputs Xi.
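The sum-of-products function above can be expressed directly in software; the following sketch uses hypothetical example values (not taken from the patent) purely to illustrate the arithmetic:

```python
# Minimal sketch of the sum-of-products function f(Xi) = sum over i of Wi * Xi.
def sum_of_products(weights, inputs):
    """Return sum_i W_i * X_i over M terms."""
    assert len(weights) == len(inputs)
    return sum(w * x for w, x in zip(weights, inputs))

weights = [2, -1, 3]   # example coefficients W_i
inputs = [1, 4, 2]     # example variable inputs X_i
print(sum_of_products(weights, inputs))  # 2*1 + (-1)*4 + 3*2 = 4
```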


The sum-of-products function can be realized as a circuit operation using cross-point array architectures in which the electrical characteristics of cells of the array effectuate the function. One problem associated with large computations of this type arises because of the complexity of the data flow among memory locations used in the computations which can involve large tensors of input variables and large numbers of weights.


It is desirable to provide structures for sum-of-products operations suitable for implementation in-memory, to reduce the number of data movement operations required.


SUMMARY

A technology for in-memory multiply and accumulate functions is described. In one aspect, the technology provides a method using an array of memory cells, such as NOR flash architecture memory cells.


One method described includes programming M memory cells in a row of the array on a particular word line WLn, and on a plurality of bit lines BLi, for i going from 0 to M−1, with values Wi,n, for i going from 0 to M−1, or accessing already programmed memory cells, for example by controlling a row decoder to select a word line for a particular row of programmed cells. The values Wi,n can correspond with weights, or coefficients, of the terms in a sum-of-products or multiply and accumulate function that uses the cells on word line WLn and bit lines BLi. The values Wi,n can be based on multiple bits per cell. In NOR flash memory embodiments, the values Wi,n correspond with threshold values of the memory cells. Also, this method includes biasing the bit lines BLi with input values Xi,n, respectively, for i going from 0 to M−1, for the cells on the word line WLn. The input values can be analog bias voltages that are generated using a digital-to-analog converter in response to multibit digital input signals for each term of the sum-of-products function. This method includes applying a word line voltage to the particular word line WLn so that the memory cells in the row conduct currents corresponding to the products Wi,n*Xi,n from the respective cells. The currents conducted by the cells in the row represent respective terms of a sum-of-products function, and are summed to produce an output current representing the sum of the terms. The output current is sensed to provide the result of the in-memory computation of the sum-of-products function.
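The steps of the method can be modeled behaviorally; the class and method names below are illustrative, not from the patent, and the ideal current model (cell current exactly proportional to W·X) is an assumption:

```python
class NorRowMac:
    """Behavioral sketch of one word line of the described method."""
    def __init__(self, weights):
        # "programming": store W_{i,n} for the M cells on word line WLn
        self.weights = list(weights)

    def compute(self, inputs):
        # "biasing": each bit line BLi is biased according to input X_{i,n};
        # with the word line driven, each cell conducts ~ W_{i,n} * X_{i,n}
        assert len(inputs) == len(self.weights)
        currents = [w * x for w, x in zip(self.weights, inputs)]
        # "summing/sensing": the cell currents add at the shared sensing node
        return sum(currents)

row = NorRowMac([1, 2, 3, 4])
print(row.compute([1, 1, 1, 1]))  # 1 + 2 + 3 + 4 = 10
```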


In some embodiments, the row of memory cells in an array can be configured into P sets of M cells each, and M is greater than 1. The output current from each of the P sets of M cells can be summed in parallel.


In some embodiments, multiple rows of the array can be programmed or accessed, and results computed for each row in sequence according to control circuitry and commands applied to configure the operation. Also, in some embodiments, multiple rows of the array can be programmed or accessed in a single sensing operation, and results computed for each bit line according to control circuitry and commands applied to configure the operation.


Also, an in-memory multiply and accumulate circuit is described. In an example described herein, the circuit includes a memory array, such as a NOR flash array, including memory cells on a set of word lines and a set of bit lines, storing respective weight values Wi,n. A row decoder is coupled to the set of word lines, and configured to apply word line voltages to selected word lines in the set. A plurality of bit line bias circuits is included. The bit line bias circuits have corresponding inputs connected to an input data path, and outputs connected to respective bit lines in the set of bit lines. The bit line bias circuits produce bit line bias voltages for the respective bit lines as a function of input values Xi,n on the corresponding inputs. A circuit includes a plurality of current sensing circuits, each of which is connected to receive currents in parallel from a corresponding multi-member subset of bit lines in the set of bit lines, and to produce an output in response to a sum of currents from the corresponding multi-member subset of bit lines. In some embodiments, the multi-member subset of bit lines can be the entire set of bit lines. In other embodiments, the circuit can include a plurality of multi-member subsets usable in parallel.


In other embodiments, a row decoder is coupled to the set of word lines, and configured to apply word line voltages to a selected plurality of word lines in the set to access a plurality of memory cells in parallel. A plurality of bit line bias circuits is included. The bit line bias circuits have corresponding inputs connected to an input data path, and outputs connected to respective bit lines in the set of bit lines. The bit line bias circuits produce bit line bias voltages for the respective bit lines as a function of input values Xi,n on the corresponding inputs. A circuit includes a plurality of current sensing circuits, each of which is connected, directly or via a switch, to receive current from a selected one of the bit lines, and to produce an output in response to a sum of currents from the corresponding plurality of memory cells on the selected bit line.


In some embodiments, the bit line bias circuits can comprise digital-to-analog converters (DACs).


Also, in one circuit described herein, some or all of the memory cells in the array are connected between corresponding bit lines and a common reference line, which can be referred to in connection with a NOR flash array as a common source line. A source line bias circuit can be connected to the common source line, and to the bit line bias circuits to compensate for variations in the voltage on the common source line.


Other aspects and advantages of the present invention can be seen on review of the drawings, the detailed description and the claims, which follow.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a simplified diagram of an in-memory sum-of-products circuit according to embodiments described herein.



FIG. 2 is a simplified diagram of an alternative implementation of an in-memory sum-of-products circuit according to embodiments described herein.



FIG. 3 is a more detailed diagram of a sum-of-products circuit according to embodiments described herein.



FIG. 4 is a graph showing distributions of threshold voltages which correspond to weights or coefficients stored in memory cells in embodiments described herein.



FIG. 5 is a diagram of an example digital-to-analog converter usable as a bit line bias circuit in embodiments described herein.



FIG. 6 is a diagram of an example sense amplifier suitable for sensing current sums according to embodiments described herein.



FIG. 7 is a timing diagram showing operation of the sense amplifier of FIG. 6.



FIG. 8 is a logic table showing operation of the sense amplifier of FIG. 6.



FIG. 9 is a flowchart of an in-memory sum-of-products operation according to embodiments described herein.



FIG. 10 is a more detailed diagram of an alternative sum-of-products circuit according to embodiments described herein.





DETAILED DESCRIPTION

A detailed description of embodiments of the present invention is provided with reference to FIGS. 1-10.



FIG. 1 illustrates an in-memory sum-of-products circuit. The circuit includes an array 10 of NOR flash cells. The array 10 includes a plurality of bit lines BL0 and BL1 and a plurality of word lines WL0 and WL1.


Memory cells in the array are disposed on the plurality of bit lines and the plurality of word lines. Each memory cell stores a weight W0,0; W1,0; W0,1; W1,1; which acts as a coefficient of a term of the sum-of-products function.


Word line circuits 11, which can include word line decoders and drivers, are configured to apply word line voltages on selected word lines in support of the sum-of-products function.


Bit line circuits 12 include circuitry to bias each bit line in the plurality of bit lines with a bias voltage that corresponds to an input value for each term of the sum-of-products function, where inputs X0,n and X1,n correspond with the input value for bit line BL0 and bit line BL1 stored in cells on a particular word line WLn.


The outputs from the bit line circuits for bit lines BL0 and BL1, represented in FIG. 1 as I_cell 1 and I_cell 2, are coupled to a summing node 13 to produce an output current ISENSE for the plurality of cells. The output current ISENSE is applied to a sense amplifier 14, which outputs a value corresponding to the sum of the terms W0,n*X0,n+W1,n*X1,n.


Control circuits 15 are configured to execute operations to program the weights in the cells in the array, and to execute the sum-of-products operations. The programming can be implemented using state machines and logic circuits adapted for programming the particular type of memory cell in the array. In embodiments described herein, multibit or multilevel programming is utilized to store weights having 2-bit, 3-bit, 4-bit or wider values, or effectively analog values. In support of programming, circuitry such as a page buffer, program voltage generators, and program pulse and verify sequence logic can be included.


The control circuits 15, in support of executing sum-of-products operations, can include a sequencer or decoder that selects word lines corresponding to the rows of weights to be used in a particular cycle of calculation. In one example, a sequence of computations can be executed by applying word line voltages to the word lines in the array in sequence to access corresponding rows of cells, while input values corresponding to each row are applied in parallel for each sequence on the bit line circuits. The sum-of-products computation can comprise a sum of currents in one or more selected memory cells on a plurality of bit lines, or in other embodiments, a sum of currents in a plurality of memory cells on one bit line.


The control circuits 15 can also include logic for controlling the timing and functioning of the sense amplifier 14, for generating multibit outputs in response to the output current ISENSE.


In the example illustrated by FIG. 1, the memory cells in the array can include charge storage memory cells, such as floating gate cells or dielectric charge trapping cells, having drain terminals coupled to corresponding bit lines, and source terminals coupled to ground. Other types of memory cells can be utilized in other embodiments, including but not limited to many types of programmable resistance memory cells like phase change based memory cells, magnetoresistance based memory cells, metal oxide based memory cells, and others.



FIG. 2 illustrates an alternative embodiment, in which components corresponding to components of FIG. 1 have like reference numerals and are not described again. In the embodiment of FIG. 2, the memory cells in the array 20 are coupled between corresponding bit lines and a common source line 21. The common source line 21 is coupled to a source bias control circuit 22, which is also connected to the bit line circuits 12. The source bias control circuit 22 provides a feedback signal on line 23 to the bit line circuits based on variations in the voltage on the common source line 21. The bit line circuits 12 can adjust the level of bias voltages applied to the bit lines by the bit line circuits 12 in response to the feedback signal on line 23. This can be used to compensate for a source crowding effect. If the source voltage on the common source line increases, a corresponding increase in the bit line bias voltages can be induced.
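The compensation described for FIG. 2 can be sketched as follows; the simple additive feedback law (raise the bit line bias by exactly the source-line rise) is an assumption for illustration, since the actual feedback behavior is circuit-specific:

```python
def compensated_bit_line_bias(nominal_bias, v_source, v_source_nominal=0.0):
    """Sketch of source-line feedback: if the common source line rises above
    its nominal level, raise the bit line bias by the same amount so the
    cell's intended drain-to-source bias is preserved."""
    return nominal_bias + (v_source - v_source_nominal)

# source line has crowded up by 50 mV; the 0.40 V bias becomes 0.45 V
print(compensated_bit_line_bias(0.40, 0.05))
```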



FIG. 3 illustrates an in-memory sum-of-products circuit including an expanded array of memory cells, such as NOR flash memory cells. The expanded array includes a plurality of blocks (e.g. 50, 51) of memory cells. In this example, the array includes 512 word lines WL0 to WL511, where each block of memory cells includes 32 rows. Thus, block 50 includes memory cells on word lines WL0 to WL31, and block 51 includes memory cells on word lines WL480 to WL511. Also, in this example, the array includes 512 bit lines BL0 to BL511.


Each block includes corresponding local bit lines that are coupled to the global bit lines BL0 to BL511 by corresponding block select transistors (e.g. 58, 59, 60, and 61) on block select lines BLT0 to BLT15.


A row decoder 55 (labeled XDEC) is coupled to the word lines, and is responsive to addressing or sequencing circuitry to select one or more word lines at a time in one or more blocks at a time as suits a particular operation. Also, the row decoder 55 includes word line drivers to apply word line voltages in support of the sum-of-products operation.


Each particular word line WLn is coupled to a row of memory cells in the array. In the illustrated example, WLn is coupled to memory cells (e.g. 68, 69, 70, and 71). Each memory cell in the row corresponding to WLn is programmed with an analog or multibit value Wi,n, where the index i corresponds to the bit line or column in the array, and the index n corresponds to the word line or row in the array.


Each bit line is coupled to bit line bias circuits, including a corresponding bit line clamp transistor (75, 76, 77, and 78). The gate of each bit line clamp transistor is coupled to a corresponding digital-to-analog converter DAC (80, 81, 82, and 83). Each digital-to-analog converter has a digital input corresponding to the input variable Xi,n, where the index i corresponds with the bit line number and the index n corresponds with the selected word line number. Thus, the digital-to-analog converter DAC 80 on bit line BL0 receives a digital input X0,n during the sum-of-products computation for the row corresponding to word line WLn. In other embodiments, the input variables can be applied by varying the block select line voltages connected to the block select transistors (e.g. 58, 59, 60, and 61) on block select lines BLT0 to BLT15. In this embodiment, the block select transistors are part of the bit line bias circuits.


In the illustrated example, the array includes a set of bit lines BL0 to BL511 which is arranged in 128 subsets of four bit lines each. The four bit lines of each subset are coupled through the corresponding bit line clamp transistors to a summing node (e.g. 85, 86), which is in turn coupled to a corresponding current sensing sense amplifier (e.g. SA0 (90) and SA127 (91)). The outputs of the sense amplifiers on lines 92, 93 are digital values representing the sum of the terms represented by the cells on the corresponding four bit lines on word line WLn. These digital values can be provided to a digital summing circuit to produce an output representing a 512-term sum-of-products computation, based on in-memory computation of 128 four-term sum-of-products computations.
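The two-stage arrangement (analog four-term partial sums, then a digital total) can be sketched numerically; the function name and the short eight-element stand-in for the 512 bit lines are illustrative only:

```python
def partitioned_mac(weights, inputs, group=4):
    """Sketch of FIG. 3's partitioning: each sense amplifier produces one
    analog `group`-term partial sum, and a digital stage adds them up."""
    partials = [
        sum(w * x for w, x in zip(weights[k:k + group], inputs[k:k + group]))
        for k in range(0, len(weights), group)
    ]
    return partials, sum(partials)   # (per-amplifier outputs, full dot product)

w = list(range(8))      # stand-in weights for 8 of the 512 bit lines
x = [1] * 8             # all-ones inputs for clarity
partials, total = partitioned_mac(w, x)
print(partials, total)  # [6, 22] 28
```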


In other embodiments, the number of bit lines in each subset can be any number, up to and including all of the bit lines in the set of bit lines in the array. The number of bit lines in each subset can be limited based on the range of the sense amplifiers utilized. The range of the sense amplifiers involves a trade-off among a variety of factors, including the complexity of the circuitry required and the speed of operation required for a given implementation.


As mentioned above, each memory cell is programmed with a weight value Wi,n. In an example in which the memory cell is a flash cell, the weight value can be represented by a threshold voltage that is programmed by charge tunneling into the charge trapping structure of the cell. Multilevel programming or analog programming algorithms can be utilized, in which the power applied for the purposes of programming a value in the memory cell is adjusted according to the desired threshold voltage.



FIG. 4 illustrates a representative distribution of threshold voltages usable to store four different weight values in each cell. For example, the weight stored in a given cell can have a first value falling within the distribution 100 programmed to have a minimum threshold of 2.5 V, a second value falling within the distribution 101 programmed to have a minimum threshold of 3.5 V, a third value falling within distribution 102 programmed to have a minimum threshold of 4.5 V, and a fourth value falling within distribution 103 programmed to have a minimum threshold of 5.5 V. In order to execute a sum-of-products operation for memory cells having weights within these ranges of thresholds, a word line voltage 104 of about 6.5 V can be applied. The current output of a memory cell receiving the word line voltage will be a function of the difference between the word line voltage and the threshold value stored in the cell, and of the bit line bias voltage applied by the bit line circuits.
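A first-order sketch of that current dependence follows. The threshold levels and 6.5 V word line voltage come from FIG. 4, but the linear-region formula and the scale factor k are assumptions; a real cell's I-V characteristic is device-dependent:

```python
VT_MIN = {0: 2.5, 1: 3.5, 2: 4.5, 3: 5.5}   # minimum Vt per weight level (FIG. 4)
V_WL = 6.5                                   # word line read voltage

def cell_current(level, v_bl, k=1.0e-6):
    """Crude linear-region estimate: I ~ k * (V_WL - Vt) * V_BL.
    The overdrive (V_WL - Vt) carries the stored weight; V_BL carries the input."""
    overdrive = max(V_WL - VT_MIN[level], 0.0)
    return k * overdrive * v_bl

# a lower stored Vt (smaller level number) yields a larger read current
print(cell_current(0, 0.5) > cell_current(3, 0.5))  # True
```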


In some embodiments, the threshold voltages achieved during the programming operation can be implemented using an analog technique, which does not rely on minimum or maximum threshold levels for each programming operation, but rather relies on the power applied in one pulse or multiple pulses during the programming operation which might be determined based on analog or digital inputs.



FIG. 5 illustrates an example of a digital-to-analog converter which accepts a three-bit input value (X0,n) stored in a register 150. The output of the register 150 is coupled to a multiplexer 151, and selects one of the inputs to the multiplexer. For a three-bit input value, the multiplexer selects from among eight inputs. The inputs Q1 to Q8 in this example are generated by a resistor ladder 152. The resistor ladder 152 includes a current source implemented using an operational amplifier 153 having an output driving the gate of a p-channel transistor 154. The source of the p-channel transistor 154 is coupled to the resistor ladder 152. The operational amplifier 153 has a first input coupled to a bandgap reference voltage BGREF, which can be about 0.8 V for example, and a second input connected in feedback 155 to the source of the p-channel transistor 154. The output of the multiplexer 151 is coupled to an operational amplifier 160 in a unity-gain configuration, with an output connected to the gate of n-channel transistor 161, which has its source connected via resistor 162 to ground, and in feedback 163 to the second input of the operational amplifier 160.


The bit line circuits on each of the bit lines in an embodiment like that of FIG. 3 can have corresponding three-bit registers (e.g. 150) coupled to three-bit digital-to-analog converters. Of course, converters of greater or lesser precision can be utilized as suits a particular implementation.
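The ladder DAC's transfer function can be sketched as below. Uniform tap spacing up to a 0.8 V top voltage mirrors the bandgap reference mentioned for FIG. 5, but the actual tap values depend on the ladder's resistor ratios, so treat these numbers as assumptions:

```python
def dac3(code, v_top=0.8, taps=8):
    """Sketch of the three-bit ladder DAC: the register value selects one of
    eight ladder taps Q1..Q8, assumed uniformly spaced up to v_top."""
    assert 0 <= code < taps, "three-bit code must be 0..7"
    return v_top * (code + 1) / taps

print(round(dac3(0), 3), round(dac3(7), 3))  # 0.1 0.8
```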



FIG. 6 illustrates an example of a sense amplifier in a circuit like that of FIG. 3. For example, a sense amplifier like that of FIG. 6 can be configured to sense currents over a range of about 4 μA to about 128 μA, and convert those values into a three-bit digital output Bit0, Bit1, Bit2.


In this diagram, the current from an input summing node, corresponding to the outputs from a subset of the bit lines in the array, is represented as ISENSE 200. The current ISENSE 200 is coupled to a current sensing circuit having an input side including transistor set 210, 211, and a reference side including transistor set 212, 213.


The input side transistor set 210, 211 includes transistors MSB[2:0] 210 which have inputs connected to corresponding enable signals EN[2:0] which are asserted during a sensing sequence as described below. Also, the input side transistor set 210, 211 includes transistors MPB[2:0] 211 configured in series with corresponding transistors in the transistor set MSB[2:0] 210.


Reference side transistor set 212, 213 includes transistors MSA[2:0] 212 which have inputs connected to corresponding enable signals EN[2:0] which are asserted during a sensing sequence as described below. Also, the reference side transistor set 212, 213 includes current mirror reference transistors MPA[2:0] configured in series with corresponding transistors in the transistor set MSA[2:0]. The gates of the transistors MPA[2:0] are connected to the sources of the transistors, and in a current mirror configuration with the gates of the transistors MPB[2:0].


A reference current I-ref is applied to the reference side transistor set, and is generated using a reference current generator 220. The reference current generator 220 includes current source transistors 225, 226, 227 having their gates controlled by a reference voltage VREF. The current source transistors 225, 226, 227 are sized so as to produce respective currents 16 μA, 32 μA and 64 μA.


The outputs of the current source transistors 225, 226, 227 are connected to corresponding enable transistors 221, 222, 223, which are controlled respectively by control signals EN0, EN1 and EN2, which also control the transistors MSA[2:0] 212 and MSB[2:0] 210. The enable transistors 221, 222, 223 connect the current source transistors to the node 215, at which the current I-ref is provided.


The sense amplifier includes a sensing node 201, which fluctuates according to the difference between the current ISENSE 200 and the reference current I-ref, as adjusted by scaling of the current mirror transistors on the input side relative to the reference side. The sensing node 201 is connected to the D inputs of three latches 230, 231, 232. The latches 230, 231, 232 are clocked by the signals on the outputs of corresponding AND gates 240, 241, 242. The AND gates 240, 241, 242 receive as inputs the control signals sense2, sense1, sense0, respectively, and a sensing clock signal clk. The outputs of the latches 230, 231, 232 provide the three-bit output Bit0, Bit1, Bit2 of the sense amplifier. The outputs Bit1 and Bit2, where Bit2 is the most significant bit, are coupled to the control logic 235, which generates the control signals EN[2:0].



FIG. 7 illustrates a timing diagram for the circuit of FIG. 6. As can be seen, the control signals Sense0 to Sense2 are asserted in sequence. Upon assertion of the control signal Sense0, the enable signal EN2 is asserted. In this case, the reference current I-ref will be equal to the current through transistor 227, or 64 μA. The MSB latch 232 will store a bit, Bit2, indicating whether the current is above or below 64 μA.


Upon assertion of the control signal Sense1, both control signal EN2 and control signal EN1 are asserted if Bit2 is 1, corresponding to a sensed current above 64 μA, and only the control signal EN1 is asserted if Bit2 is zero. In the first case, this results in a current I-ref equal to the sum of the currents from transistors 226 and 227, or 96 μA, in this example. In the second case, this results in a current I-ref equal to the current from transistor 226 alone, or 32 μA, in this example. The latch 231 will store a value Bit1 indicating whether the current is above or below 96 μA in the first case, or above or below 32 μA in the second case.


Upon assertion of the control signal Sense2, in the first case, all three control signals EN2, EN1 and EN0 are asserted in the case illustrated, resulting in a current I-ref equal to 112 μA, if both Bit2 and Bit1 are 1. If Bit1 is 0 (case Data=(1,0,x), not shown), then the control signal EN1 is not asserted, resulting in a current I-ref equal to 80 μA.


Upon assertion of control signal Sense2 in the second case, only the control signal EN0 is asserted, resulting in a current I-ref equal to 16 μA if both Bit2 and Bit1 are zero. If Bit1 is 1 (case Data=(0,1,x) not shown), then both EN1 and EN0 are asserted, resulting in a current I-ref equal to 48 μA.
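The Sense0 to Sense2 sequence described above amounts to a successive-approximation search over the binary-weighted 64/32/16 μA sources: each cycle adds the next source to the reference, and keeps it enabled only if ISENSE exceeds the trial. The sketch below reproduces that logic numerically (the function name is illustrative):

```python
def sense_3bit(i_sense_uA):
    """Sketch of the FIG. 6/7 sensing sequence: three compare cycles against
    references built from the 64, 32 and 16 uA sources.
    Returns [Bit2, Bit1, Bit0], MSB first."""
    bits = []
    i_ref = 0.0
    for source in (64.0, 32.0, 16.0):   # Sense0, Sense1, Sense2 cycles
        trial = i_ref + source          # enable the next source
        bit = 1 if i_sense_uA > trial else 0
        bits.append(bit)
        if bit:
            i_ref = trial               # source stays enabled for later cycles
    return bits

print(sense_3bit(100.0))  # [1, 1, 0]: above 64, above 96, below 112
print(sense_3bit(40.0))   # [0, 1, 0]: below 64, above 32, below 48
```

This reproduces the reference values named in the text: 96 μA and 32 μA trials in the second cycle, and 112, 80, 48 or 16 μA trials in the third, depending on the earlier bits.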


The table shown in FIG. 8 illustrates the logic, which can be executed by the control circuitry illustrated in FIG. 1 for example.



FIG. 9 is a flowchart showing a method for in-memory sum-of-product computation utilizing a memory array including a plurality of bit lines and a plurality of word lines, such as a NOR flash array.


The illustrated method includes, for each row being utilized in a sum-of-products operation, programming a number P of sets of memory cells in a row of the array, with M memory cells in each set, the P sets of memory cells being on word line WLn and on bit lines BLi, for i going from 0 to P*M−1, with the values Wi,n, for i going from 0 to P*M−1 (300). Also, the method includes biasing the bit lines BLi with values Xi,n, respectively, for i going from 0 to P*M−1 (301). To cause execution of a sum-of-products computation, the method includes applying a word line voltage to word line WLn so that the memory cells in the row conduct currents corresponding to the products Wi,n*Xi,n from the respective cells (302). While applying the word line voltage, the method includes summing the currents on the M bit lines connected to each of the P sets of memory cells, to produce P output currents (303). An output of the sum-of-products operation is produced by sensing the P output currents (304).


The flowchart in FIG. 9 illustrates logic executed by a memory controller or by an in-memory sum-of-products device. The logic can be implemented using processors programmed using computer programs stored in memory accessible to the computer systems and executable by the processors, by dedicated logic hardware, including field programmable integrated circuits, or by combinations of dedicated logic hardware and computer programs. It will be appreciated that many of the steps can be combined, performed in parallel, or performed in a different sequence, without affecting the functions achieved. In some cases, as the reader will appreciate, a rearrangement of steps will achieve the same results only if certain other changes are made as well. In other cases, as the reader will appreciate, a rearrangement of steps will achieve the same results only if certain conditions are satisfied. Furthermore, it will be appreciated that the flowchart shows only steps that are pertinent to an understanding of the technology presented, and numerous additional steps for accomplishing other functions can be performed before, after and between those shown.



FIG. 10 illustrates an in-memory sum-of-products circuit including an expanded array of memory cells, such as NOR flash memory cells, configurable for applying input values on a plurality of word lines, and weights on bit line bias circuits, operable to sum the current from a plurality of cells on one bit line at a time. The expanded array includes a plurality of blocks (e.g. 550, 551) of memory cells. In this example, the array includes 512 word lines WL0 to WL511, where each block of memory cells includes 32 rows. Thus, block 550 includes memory cells on word lines WL0 to WL31, and block 551 includes memory cells on word lines WL480 to WL511. Also, in this example, the array includes 512 bit lines BL0 to BL511.


Each block includes corresponding local bit lines that are coupled to the global bit lines BL0 to BL511 by corresponding block select transistors (e.g. 558, 559, 560, and 561) on block select lines BLT0 to BLT15.


A row decoder 555 (labeled XDEC) is coupled to the word lines, and is responsive to addressing or sequencing circuitry to select a plurality of word lines at a time in one or more blocks at a time as suits a particular operation. Also, the row decoder 555 includes word line drivers to apply word line voltages in support of the sum-of-products operation.


Each particular word line WLn is coupled to a row of memory cells in the array. In the illustrated example, WLn is coupled to memory cells (e.g. 568, 569, 570, and 571). Each memory cell in the row corresponding to WLn is programmed with an analog or multibit value Wi,n, where the index i corresponds to the bit line or column in the array, and the index n corresponds to the word line or row in the array.


Each bit line is coupled to bit line bias circuits, including a corresponding bit line clamp transistor (565, 566, 567, and 568). The gate of each bit line clamp transistor is coupled to a corresponding digital-to-analog converter DAC (580, 581, 582, and 583). Each digital-to-analog converter has a digital input corresponding to the input variable Xi,n, where the index i corresponds with the bit line number and the index n corresponds with the selected word line number. Thus, the digital-to-analog converter DAC 580 on bit line BL0 receives a digital input X0,n during the sum-of-products computation for the row corresponding to word line WLn. In other embodiments, the input variables can be applied by varying the block select line voltages connected to the block select transistors (e.g. 558, 559, 560, and 561) on block select lines BLT0 to BLT15. In this embodiment, the block select transistors are part of the bit line bias circuits.


In the illustrated example, the array includes a set of bit lines BL0 to BL511 which is arranged in 128 subsets of four bit lines each. The four bit lines of each subset are coupled through the corresponding bit line clamp transistors to a switch (e.g. 585, 586), operable to select one bit line from the corresponding subset and connect the selected bit line to a corresponding current sensing sense amplifier (e.g. SA0 (590) and SA127 (591)). The outputs of the sense amplifiers on lines 592, 593 are digital values representing the sum of the currents in a plurality of cells on one selected bit line. These digital values can be provided to a digital summing circuit to produce an output representing a sum-of-products computation, based on in-memory computation of 128 sum-of-products computations. The switches can be operated to switch in sequence from bit line to bit line, to produce a sequence of digital outputs representing the sums of currents on the corresponding bit lines. In other embodiments, a sense amplifier can be connected to each bit line, and the switches 585, 586 may be eliminated.
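The bit-line-oriented mode of FIG. 10 can be sketched as a transposed version of the earlier row computation: multiple word lines are driven at once, and the selected cells' currents add down a single bit line while a switch scans the columns. The names below are illustrative, and the ideal per-cell W·X current model is an assumption:

```python
def bit_line_sum(cell_weights, wl_inputs, col):
    """Sum down ONE bit line: several word lines are driven with inputs, and
    each selected cell on bit line `col` contributes W[n][col] * X[n]."""
    return sum(cell_weights[n][col] * wl_inputs[n]
               for n in range(len(wl_inputs)))

def scan_bit_lines(cell_weights, wl_inputs, cols):
    # the switch steps from bit line to bit line, one sensed sum per column
    return [bit_line_sum(cell_weights, wl_inputs, c) for c in cols]

W = [[1, 2],    # cell values: rows = word lines, columns = bit lines
     [3, 4]]
X = [1, 1]      # per-word-line inputs
print(scan_bit_lines(W, X, [0, 1]))  # [1+3, 2+4] = [4, 6]
```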


In other embodiments, the number of bit lines in each subset can be any number, up to and including all of the bit lines in the set of bit lines in the array. The number of bit lines in each subset can be limited based on the range of the sense amplifiers utilized. The range of the sense amplifiers involves a trade-off among a variety of factors, including the complexity of the circuitry required and the speed of operation required for a given implementation.


As mentioned above, each memory cell is programmed with a weight value Wi,n. In an example in which the memory cell is a flash cell, the weight value can be represented by a threshold voltage that is programmed by charge tunneling into the charge trapping structure of the cell. Multilevel programming or analog programming algorithms can be utilized, in which the programming power applied to the memory cell is adjusted according to the desired threshold voltage.
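A common way to realize such multilevel programming is an incremental program-and-verify loop, a generic flash technique sketched below. The pulse step, tolerance, and callback names are illustrative assumptions, not parameters from the disclosure: each iteration reads the cell's threshold voltage and, if it is still below the target, applies another programming pulse to inject additional charge.

```python
# Generic program-and-verify sketch for setting a weight as a threshold voltage.

def program_cell(target_vt, read_vt, apply_pulse,
                 step=0.1, tolerance=0.05, max_pulses=50):
    """Raise the cell's threshold voltage (V) toward target_vt.

    read_vt: callable returning the cell's current threshold voltage
    apply_pulse: callable applying one programming pulse of the given step
    """
    for _ in range(max_pulses):
        vt = read_vt()
        if vt >= target_vt - tolerance:
            return vt                  # verify passed: weight is programmed
        apply_pulse(step)              # inject more charge, raising Vt
    raise RuntimeError("cell failed to verify within the pulse budget")

# Simulated cell: each pulse raises Vt by the commanded step.
state = {"vt": 1.0}
final_vt = program_cell(2.0,
                        read_vt=lambda: state["vt"],
                        apply_pulse=lambda s: state.update(vt=state["vt"] + s))
```

Analog-accurate weights tighten the tolerance and shrink the step, trading programming time for precision in the stored Wi,n.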


While the present invention is disclosed by reference to the preferred embodiments and examples detailed above, it is to be understood that these examples are intended in an illustrative rather than in a limiting sense. It is contemplated that modifications and combinations will readily occur to those skilled in the art, which modifications and combinations will be within the spirit of the invention and the scope of the following claims.

Claims
  • 1. A method for performing an in-memory multiply-and-accumulate function, using an array of memory cells, comprising: applying a word line voltage to word line WLn to access M memory cells in a row of the array on bit lines BLi, for i going from 0 to M−1, the M memory cells storing values Wi,n, for i going from 0 to M−1; biasing bit lines BLi with input values Xi,n, respectively, for i going from 0 to M−1, so that individual ones of the M memory cells on word line WLn conduct current indicative of the product of Wi,n and Xi,n, for i going from 0 to M−1, wherein biasing the bit lines BLi includes (i) converting, using corresponding digital-to-analog converters, digital inputs Xi,n to analog bias voltages and (ii) applying the analog bias voltages to corresponding bit line clamp transistors to generate corresponding bit line voltages as a function of the digital inputs Xi,n for the respective bit lines BLi; summing the currents from a plurality of memory cells to produce an output current; and sensing the output current.
  • 2. The method of claim 1, wherein summing the currents from the plurality of memory cells includes summing the currents on the bit lines BLi, for i going from 0 to M-1.
  • 3. The method of claim 1, wherein summing the currents from the plurality of memory cells includes applying word line voltage to a plurality of word lines in parallel so that the current on one of the bit lines BLi is the output current including currents from the plurality of memory cells.
  • 4. The method of claim 1, wherein the memory cells comprise multilevel non-volatile memory cells.
  • 5. The method of claim 1, including: applying a word line voltage to word line WLn to access a number P sets of memory cells in a row of the array, with M memory cells in each set, the P sets of memory cells on word line WLn and on bit lines BLi, for i going from 0 to P*M−1, storing values Wi,n, for i going from 0 to P*M−1, one of said P sets including said first mentioned M memory cells; biasing the bit lines BLi with values Xi,n, respectively, for i going from 0 to P*M−1, so that the memory cells on the row of the array conduct current corresponding to a product from respective memory cells in the row of Wi,n*Xi,n; summing the currents on the M bit lines connected to each of the P sets of memory cells, to produce P output currents; and sensing the P output currents.
  • 6. The method of claim 1, including programming the memory cells on the row with the weights Wi,n.
  • 7. The method of claim 1, including adjusting the biasing on the bit lines BLi, for i going from 0 to M-1, in response to variations in voltage on a source line coupled to at least part of the array of memory cells.
  • 8. An in-memory multiply-and-accumulate circuit, comprising: a memory array including memory cells on a set of word lines and a set of bit lines; a row decoder coupled to the set of word lines, configured to apply word line voltages to one or more selected word lines in the set; a plurality of bit line bias circuits, individual ones of the bit line bias circuits in the plurality of bit line bias circuits having corresponding inputs connected to an input data path, and having outputs connected to respective bit lines in the set of bit lines, and producing bit line bias voltages for the respective bit lines as a function of input values on the corresponding inputs; a plurality of current sensing circuits, each of the plurality of current sensing circuits being connected to receive currents from one or more bit lines in the set of bit lines, and to produce an output in response to a sum of the currents from a corresponding plurality of memory cells; and circuits to program a first memory cell in the array with a weight W, wherein a first bit line bias circuit of the plurality of bit line bias circuits includes (i) a digital-to-analog converter to convert a multiple-bit digital input X to an analog bias voltage, and (ii) a transistor to receive the analog bias voltage at a gate of the transistor and to generate a bit line voltage that is output to a first bit line corresponding to the first memory cell, and wherein the first memory cell is to conduct a current indicative of a product of the weight W and the input X.
  • 9. The circuit of claim 8, wherein multi-member subsets of the set of bit lines are connected to a current summing node, and current sensing circuits in the plurality of current sensing circuits are connected to respective summing nodes.
  • 10. The circuit of claim 8, wherein current sensing circuits in the plurality of current sensing circuits are configured to sense current from one of the bit lines in the set of bit lines, while the row decoder applies word line voltages to a plurality of word lines in parallel so that the current on one of the bit lines includes currents from the plurality of memory cells.
  • 11. The circuit of claim 8, wherein the individual ones of the bit line bias circuits in the plurality of bit line bias circuits comprise digital-to-analog converters.
  • 12. The circuit of claim 8, including circuits to program the memory cells in the array with weights Wi,n, in memory cells in a row of the array on word line WLn in the set of word lines and on bit lines BLi in the set of bit lines to store values.
  • 13. The circuit of claim 8, wherein the memory cells comprise multilevel non-volatile memory cells.
  • 14. The circuit of claim 8, wherein the plurality of bit line bias circuits include digital-to-analog converters to convert multiple-bit digital inputs Xi,n to analog bias voltages, and to apply the analog bias voltages to the respective bit lines BLi.
  • 15. The circuit of claim 8, wherein the memory array has a NOR architecture.
  • 16. The circuit of claim 15, wherein the memory array comprises dielectric charge trapping memory cells.
  • 17. The circuit of claim 8, wherein the plurality of bit lines are configured in P sets of bit lines having M members each, where M is greater than one, and the plurality of sensing circuits are connected to corresponding sets in the P sets of bit lines.
  • 18. The circuit of claim 8, wherein the memory array of memory cells comprises charge trapping memory cells in a NOR architecture having a source line coupled to at least some of the memory cells in the array, and including a source line bias circuit coupled to the plurality of bit line bias circuits, to provide feedback in response to variations in voltage on the source line.
  • 19. The circuit of claim 15, wherein the memory array comprises floating gate memory cells.
  • 20. An in-memory multiply-and-accumulate circuit, comprising: a memory array including a first memory cell on a first word line and a first bit line; a row decoder coupled to a set of word lines that includes the first word line, configured to apply word line voltages to one or more selected word lines in the set; a first bit line bias circuit comprising (i) a digital-to-analog converter to receive a corresponding first input X and to generate an analog signal, and (ii) a bit line clamp transistor to receive the analog signal at a gate of the bit line clamp transistor, and to generate an output connected to the first bit line, the first bit line bias circuit to produce a first bit line bias voltage for the first bit line as a function of the first input X; and circuits to program the first memory cell with a weight W, wherein the first memory cell is to conduct a current indicative of a product of the weight W and the input X.
US Referenced Citations (137)
Number Name Date Kind
4219829 Dorda et al. Aug 1980 A
4987090 Hsu et al. Jan 1991 A
5029130 Yeh Jul 1991 A
5586073 Hiura et al. Dec 1996 A
6107882 Gabara et al. Aug 2000 A
6313486 Kencke et al. Nov 2001 B1
6829598 Milev Dec 2004 B2
6906940 Lue Jun 2005 B1
6960499 Nandakumar et al. Nov 2005 B2
7089218 Visel Aug 2006 B1
7368358 Ouyang et al. May 2008 B2
7436723 Rinerson et al. Oct 2008 B2
7747668 Nomura et al. Jun 2010 B2
8203187 Lung et al. Jun 2012 B2
8275728 Pino Sep 2012 B2
8432719 Lue Apr 2013 B2
8589320 Breitwisch et al. Nov 2013 B2
8630114 Lue Jan 2014 B2
8725670 Visel May 2014 B2
8860124 Lue et al. Oct 2014 B2
9064903 Mitchell et al. Jun 2015 B2
9147468 Lue Sep 2015 B1
9213936 Visel Dec 2015 B2
9379129 Lue et al. Jun 2016 B1
9391084 Lue Jul 2016 B2
9430735 Vali Aug 2016 B1
9431099 Lee et al. Aug 2016 B2
9524980 Lue Dec 2016 B2
9535831 Jayasena et al. Jan 2017 B2
9536969 Yang et al. Jan 2017 B2
9589982 Cheng et al. Mar 2017 B1
9698156 Lue Jul 2017 B2
9698185 Chen et al. Jul 2017 B2
9710747 Kang et al. Jul 2017 B2
9747230 Han et al. Aug 2017 B2
9754953 Tang et al. Sep 2017 B2
9767028 Cheng et al. Sep 2017 B2
9898207 Kim et al. Feb 2018 B2
9910605 Jayasena et al. Mar 2018 B2
9978454 Jung May 2018 B2
9983829 Ravimohan et al. May 2018 B2
9991007 Lee Jun 2018 B2
10037167 Kwon et al. Jul 2018 B2
10056149 Yamada et al. Aug 2018 B2
10073733 Jain et al. Sep 2018 B1
10157012 Kelner et al. Dec 2018 B2
10175667 Bang et al. Jan 2019 B2
10242737 Lin et al. Mar 2019 B1
10311921 Parkinson Jun 2019 B1
10528643 Choi Jan 2020 B1
10534840 Petti Jan 2020 B1
10635398 Lin et al. Apr 2020 B2
10643713 Louie May 2020 B1
10719296 Lee et al. Jul 2020 B2
10777566 Lue Sep 2020 B2
10783963 Hung et al. Sep 2020 B1
10825510 Jaiswal Nov 2020 B2
10860682 Knag Dec 2020 B2
10957392 Lee et al. Mar 2021 B2
20030122181 Wu Jul 2003 A1
20050287793 Blanchet et al. Dec 2005 A1
20100182828 Shima et al. Jul 2010 A1
20100202208 Endo et al. Aug 2010 A1
20110063915 Tanaka et al. Mar 2011 A1
20110106742 Pino May 2011 A1
20110128791 Chang Jun 2011 A1
20110286258 Chen et al. Nov 2011 A1
20110297912 Samachisa et al. Dec 2011 A1
20120007167 Hung Jan 2012 A1
20120044742 Narayanan Feb 2012 A1
20120182801 Lue Jul 2012 A1
20120235111 Osano et al. Sep 2012 A1
20120254087 Visel Oct 2012 A1
20130070528 Maeda Mar 2013 A1
20130075684 Kinoshita et al. Mar 2013 A1
20140063949 Tokiwa Mar 2014 A1
20140119127 Lung May 2014 A1
20140149773 Huang et al. May 2014 A1
20140268996 Park Sep 2014 A1
20140330762 Visel Nov 2014 A1
20150008500 Fukumoto et al. Jan 2015 A1
20150171106 Suh Jun 2015 A1
20150199126 Jayasena et al. Jul 2015 A1
20150331817 Han et al. Nov 2015 A1
20160141337 Shimabukuro et al. May 2016 A1
20160181315 Lee et al. Jun 2016 A1
20160232973 Jung Aug 2016 A1
20160247579 Ueda Aug 2016 A1
20160308114 Kim et al. Oct 2016 A1
20160336064 Seo et al. Nov 2016 A1
20160358661 Vali et al. Dec 2016 A1
20170003889 Kim et al. Jan 2017 A1
20170025421 Sakakibara et al. Jan 2017 A1
20170092370 Harari Mar 2017 A1
20170123987 Cheng et al. May 2017 A1
20170148517 Harari May 2017 A1
20170160955 Jayasena et al. Jun 2017 A1
20170169885 Tang et al. Jun 2017 A1
20170169887 Widjaja Jun 2017 A1
20170263623 Zhang et al. Sep 2017 A1
20170270405 Kurokawa Sep 2017 A1
20170309634 Noguchi et al. Oct 2017 A1
20170316833 Ihm et al. Nov 2017 A1
20170317096 Shin et al. Nov 2017 A1
20170337466 Bayat et al. Nov 2017 A1
20180121790 Kim et al. May 2018 A1
20180129424 Confalonieri et al. May 2018 A1
20180144240 Garbin et al. May 2018 A1
20180157488 Shu et al. Jun 2018 A1
20180173420 Li et al. Jun 2018 A1
20180176497 Saha Jun 2018 A1
20180189640 Henry et al. Jul 2018 A1
20180240522 Jung Aug 2018 A1
20180246783 Avraham et al. Aug 2018 A1
20180286874 Kim et al. Oct 2018 A1
20180342299 Yamada et al. Nov 2018 A1
20180350823 Or-Bach et al. Dec 2018 A1
20190019564 Li Jan 2019 A1
20190035449 Saida Jan 2019 A1
20190043560 Sumbul et al. Feb 2019 A1
20190058448 Seebacher Feb 2019 A1
20190065151 Chen et al. Feb 2019 A1
20190102170 Chen Apr 2019 A1
20190148393 Lue May 2019 A1
20190164044 Song et al. May 2019 A1
20190213234 Bayat Jul 2019 A1
20190220249 Lee et al. Jul 2019 A1
20190244662 Lee et al. Aug 2019 A1
20190286419 Lin et al. Sep 2019 A1
20190311749 Song et al. Oct 2019 A1
20190325959 Bhargava et al. Oct 2019 A1
20190363131 Torng et al. Nov 2019 A1
20200026993 Otsuka Jan 2020 A1
20200065650 Tran Feb 2020 A1
20200110990 Harada Apr 2020 A1
20200160165 Sarin May 2020 A1
20200334015 Shibata Oct 2020 A1
Foreign Referenced Citations (17)
Number Date Country
1998012 Nov 2010 CN
105718994 Jun 2016 CN
105789139 Jul 2016 CN
106530210 Mar 2017 CN
2048709 Apr 2009 EP
201523838 Jun 2015 TW
201618284 May 2016 TW
201639206 Nov 2016 TW
201732824 Sep 2017 TW
201741943 Dec 2017 TW
201802800 Jan 2018 TW
201822203 Jun 2018 TW
2012009179 Jan 2012 WO
2012015450 Feb 2012 WO
2016060617 Apr 2016 WO
2017091338 Jun 2017 WO
2018201060 Nov 2018 WO
Non-Patent Literature Citations (64)
Entry
F. Merrikh-Bayat et al. “High-Performance Mixed-Signal Neurocomputing With Nanoscale Floating-Gate Memory Cell Arrays,” in IEEE Transactions on Neural Networks and Learning Systems, vol. 29, No. 10, pp. 4782-4790, Oct. 2018, doi: 10.1109/TNNLS.2017.27789 (Year: 2018).
X. Guo et al., “Fast, energy-efficient, robust, and reproducible mixed-signal neuromorphic classifier based on embedded NOR flash memory technology,” 2017 IEEE International Electron Devices Meeting (IEDM), San Francisco, CA, 2017, pp. 6.5.1-6.5.4, doi: 10.1109/IEDM.2017.8268341. (Year: 2017).
S. Aritome, R. Shirota, G. Hemink, T. Endoh and F. Masuoka, “Reliability issues of flash memory cells,” in Proceedings of the IEEE, vol. 81, No. 5, pp. 776-788, May 1993, doi: 10.1109/5.220908. (Year: 1993).
U.S. Office Action in U.S. Appl. No. 16/233,414 dated Apr. 20, 2020, 17 pages.
Chen et al., “Eyeriss: An Energy-Efficient reconfigurable accelerator for deep convolutional neural networks,” IEEE ISSCC, Jan. 31-Feb. 4, 2016, 3 pages.
EP Extended Search Report from EP19193290.4 dated Feb. 14, 2020, 10 pages.
Gonugondla et al., “Energy-Efficient Deep In-memory Architecture for NAND Flash Memories,” IEEE International Symposium on Circuits and Systems (ISCAS), May 27-30, 2018, 5 pages.
Jung et al, “Three Dimensionally Stacked NAND Flash Memory Technology Using Stacking Single Crystal Si Layers on ILD and TANOS Structure for Beyond 30nm Node,” International Electron Devices Meeting, 2006. IEDM '06, Dec. 11-13, 2006, pp. 1-4.
Lai et al., “A Multi-Layer Stackable Thin-Film Transistor (TFT) NAND-Type Flash Memory,” Electron Devices Meeting, 2006, IEDM '06 International, Dec. 11-13, 2006, pp. 1-4.
TW Office Action from TW Application No. 10820980430, dated Oct. 16, 2019, 6 pages (with English Translation).
U.S. Office Action in U.S. Appl No. 15/873,369 dated Dec. 4, 2019, 5 pages.
U.S. Office Action in U.S. Appl No. 15/887,166 dated Jul. 10, 2019, 18 pages.
U.S. Office Action in U.S. Appl No. 15/922,359 dated Oct. 11, 2019, 7 pages.
U.S. Office Action in U.S. Appl. No. 16/233,414 dated Oct. 31, 2019, 22 pages.
U.S. Office Action in related case U.S. Appl. No. 16/037,281 dated Dec. 19, 2019, 9 pages.
U.S. Office Action in related case U.S. Appl. No. 16/297,504 dated Feb. 4, 2020, 15 pages.
Wang et al., “Three-Dimensional NAND Flash for Vector-Matrix Multiplication,” IEEE Trans. on Very Large Scale Integration Systems (VLSI), vol. 27, No. 4, Apr. 2019, 4 pages.
Anonymous, “Data in the Computer”, May 11, 2015, pp. 1-8, https://web.archive.org/web/20150511143158/https:// homepage.cs.uri .edu/faculty/wolfe/book/Readings/Reading02.htm (Year. 2015)—See Office Action dated Aug. 17, 2020 in U.S. Appl. No. 16/279,494 for relevance—no year provided by examiner.
Rod Nussbaumer, “How is data transmitted through wires in the computer?”, Aug. 27, 2015, pp. 1-3, https://www.quora.com/ How-is-data-transmitted-through-wires-in-the-computer (Year: 2015)—See Office Action dated Aug. 17, 2020 in U.S. Appl. No. 16/279,494 for relevance—no year provided by examiner.
Scott Thornton, “What is DRAm (Dynamic Random Access Memory) vs SRAM?”, Jun. 22, 2017, pp. 1-11, https://www .microcontrollertips.com/dram-vs-sram/ (Year: 2017)—See Office Action dated Aug. 17, 2020 in U.S. Appl. No. 16/279,494 for relevance—no year provided by examiner.
TW Office Action from TW Application No. 10920683760, dated Jul. 20, 2020, 4 pages.
U.S. Office Action in U.S. Appl. No. 16/233,404 dated Jul. 30, 2020, 20 pages.
U.S. Office Action in U.S. Appl. No. 16/279,494 dated Aug. 17, 2020, 25 pages.
Webopedia, “DRAM—dynamic random access memory”, Jan. 21, 2017, pp. 1-3, https://web.archive.org/web/20170121124008/https://www.webopedia.com/TERM/D/DRAM.html (Year: 2017)—See Office Action dated Aug. 17, 2020 in U.S. Appl. No. 16/279,494 for relevance—no year provided by examiner.
Webopedia, “volatile memory”, Oct. 9, 2017, pp. 1-4, https://web.archive.org/web/20171009201852/https://www.webopedia.com/TERMN/volatile_memory.html (Year: 2017)—See Office Action dated Aug. 17, 2020 in U.S. Appl. No. 16/279,494 for relevance—no year provided by examiner.
U.S. Office Action in related case U.S. Appl. No. 15/873,369 dated May 9, 2019, 8 pages.
Lue et al., “A Novel 3D AND-type NVM Architecture Capable of High-density, Low-power In-Memory Sum-of-Product Computation for Artificial Intelligence Application,” IEEE VLSI, Jun. 18-22, 2018, 2 pages.
U.S. Office Action in U.S. Appl. No. 15/922,359 dated Jun. 24, 2019, 8 pages.
EP Extended Search Report from 18155279.5—1203 dated Aug. 30, 2018, 8 pages.
EP Extended Search Report from EP18158099.4 dated Sep. 19, 2018, 8 pages.
Jang et al., “Vertical cell array using TCAT(Terabit Cell Array Transistor) technology for ultra high density NAND flash memory,” 2009 Symposium on VLSI Technology, Honolulu, HI, Jun. 16-18, 2009, pp. 192-193.
Kim et al. “Multi-Layered Vertical Gate NAND Flash Overcoming Stacking Limit for Terabit Density Storage,” 2009 Symposium on VLSI Technology Digest of Papers, Jun. 16-18, 2009, 2 pages.
Kim et al. “Novel Vertical-Stacked-Array-Transistor (VSAT) for Ultra-High-Density and Cost-Effective NAND Flash Memory Devices and SSD (Solid State Drive)”, Jun. 2009 Symposium on VLSI Technolgy Digest of Technical Papers, pp. 186-187. (cited in parent—copy not provided herewith).
Ohzone et al., “Ion-Implanted Thin Polycrystalline-Silicon High-Value Resistors for High-Density Poly-Load Static RAM Applications,” IEEE Trans. on Electron Devices, vol. ED-32, No. 9, Sep. 1985, 8 pages.
Sakai et al., “A Buried Giga-Ohm Resistor (BGR) Load Static RAM Cell,” IEEE Symp. on VLSI Technology, Digest of Papers, Sep. 10-12, 1984, 2 pages.
Schuller et al., “Neuromorphic Computing: From Materials to Systems Architecture,” US Dept. of Energy, Oct. 29-30, 2015, Gaithersburg, MD, 40 pages.
Seo et al., “A Novel 3-D Vertical FG NAND Flash Memory Cell Arrays Using the Separated Sidewall Control Gate (S-SCG) for Highly Reliable MLC Operation,” 2011 3rd IEEE International Memory Workshop (IMW), May 22-25, 2011, 4 pages.
Soudry, et al. “Hebbian learning rules with memristors,” Center for Communication and Information Technologies CCIT Report #840, Sep. 1, 2013, 16 pages.
Tanaka H., et al., “Bit Cost Scalable Technology with Punch and Plug Process for Ultra High Density Flash Memory,” 2007 Symp. VLSI Tech., Digest of Tech. Papers, pp. 14-15.
U.S. Office Action in U.S. Appl. No. 15/887,166 dated Jan. 30, 2019, 18 pages.
U.S. Appl. No. 15/873,369, filed Jan. 17, 2018, entitled “Sum-of-Products Accelerator Array,” Lee et al., 52 pages.
U.S. Appl. No. 15/887,166 filed Feb. 2, 2018, entitled “Sum-of-Products Array for Neuromorphic Computing System,” Lee et al., 49 pages.
U.S. Appl. No. 15/895,369 filed Feb. 13, 2018, entitled “Device Structure for Neuromorphic Computing System,” Lin et al., 34 pages.
U.S. Appl. No. 15/922,359 filed Mar. 15, 2018, entitled “Voltage Sensing Type of Matrix Multiplication Method for Neuromorphic Computing System,” Lin et al., 40 pages.
U.S. Appl. No. 16/037,281 filed Jul. 17, 2018, 87 pages.
Whang, SungJin et al. “Novel 3-dimensional Dual Control-gate with Surrounding Floating-gate (DC-SF) NAND flash cell for 1Tb file storage application,” 2010 IEEE Int'l Electron Devices Meeting (IEDM), Dec. 6-8, 2010, 4 pages.
Chen et al., “A Highly Pitch Scalable 3D Vertical Gate (VG) NAND Flash Decoded by a Novel Self-Aligned Independently Controlled Double Gate (IDG) StringSelect Transistor (SSL),” 2012 Symp. on VLSI Technology (VLSIT), Jun. 12-14, 2012, pp. 91-92.
Choi et al., “Performance Breakthrough in NOR Flash Memory With Dopant-Segregated Schottky-Barrier (DSSB) SONOS Devices,” Jun. 2009 Symposium on VLSITechnology Digest of Technical Papers, pp. 222-223.
Fukuzumi et al. “Optimal Integration and Characteristics of Vertical Array Devices for Ultra-High Density, Bit-Cost Scalable Flash Memory,” IEEE Dec. 2007, pp. 449-452.
Hsu et al., “Study of Sub-30nm Thin Film Transistor (TFT) Charge-Trapping (CT) Devices for 3D NAND Flash Application,” 2009 IEEE, Dec. 7-9, 2009, pp. 27.4.1-27.4.4.
Hubert et al., “A Stacked SONOS Technology, Up to 4 Levels and 6nm Crystalline Nanowires, With Gate-All-Around or Independent Gates (Flash), Suitable for Full 3D Integration,” IEEE 2009, Dec. 7-9, 2009, pp. 27.6.1-27.6.4.
Hung et al., “A highly scalable vertical gate (VG) 3D NAND Flash with robust program disturb immunity using a novel PN diode decoding structure,” 2011 Symp. on VLSI Technology (VLSIT), Jun. 14-16, 2011, pp. 68-69.
Katsumata et al., “Pipe-shaped BiCS Flash Memory With 16 Stacked Layers and Multi-Level-Cell Operation for Ultra High Density Storage Devices,” 2009 Symposium on VLSI Technology Digest of Technical Papers, Jun. 16-18, 2009, pp. 136-137. cited byapplicant.
Kim et al., “Novel 3-D Structure for Ultra High Density Flash Memory with VRAT (Vertical-Recess-Array-Transistor) and PIPE (Planarized Integration on the same PlanE),” IEEE 2008 Symposium on VLSI Technology Digest of Technical Papers, Jun. 17-19, 2008, pp. 122-123.
Kim et al., “Three-Dimensional NAND Flash Architecture Design Based on Single-Crystalline STacked ARray,” IEEE Transactions on Electron Devices, vol. 59, No. 1, pp. 35-45, Jan. 2012.
Lue et al., “A Highly Scalable 8-Layer 3D Vertical-Gate (VG) TFT NAND Flash Using Junction-Free Buried Channel BE-SONOS Device”, 2010 Symposium on VLSI Technology Digest of Technical Papers, pp. 131-132, Jun. 15-17, 2010.
Nowak et al., “Intrinsic fluctuations in Vertical NAND flash memories,” 2012 Symposium on VLSI Technology, Digest of Technical Papers, pp. 21-22, Jun. 12-14, 2012.
TW Office Action from TW Application No. 10820907820, dated Sep. 22, 2020, 41 pages.
Wang, Michael, “Technology Trends on 3D-NAND Flash Storage”, Impact 2011, Taipei, dated Oct. 20, 2011, found at www.impact.org.tw/2011/Files/NewsFile/201111110190.pdf.
Y.X. Liu et al., “Comparative Study of Tri-Gate and Double-Gate-Type Poly-Si Fin-Channel Spli-Gate Flash Memories,” 2012 IEEE Silicon Nanoelectronics Workshop (SNW), Honolulu, HI, Jun. 10-11, 2012, pp. 1-2.
U.S. Office Action in U.S. Appl. No. 16/224,602 dated Mar. 24, 2021, 17 pages.
U.S. Office Action in U.S. Appl. No. 16/279,494 dated Nov. 12, 2020, 25 pages.
Webopedia, “SoC”, Oct. 5, 2011, pp. 1-2, https://web.archive.org/web/20111005173630/https://www.webopedia.com/ TERM/S/SoC.html (Year: 2011)—See Office Action dated Aug. 17, 2020 in U.S. Appl. No. 16/279,494 for relevance—no month provided by examiner.
U.S. Office Action in U.S. Appl. No. 16/224,602 dated Nov. 23, 2020, 14 pages.
Related Publications (1)
Number Date Country
20200301667 A1 Sep 2020 US