Solid State Drives (SSDs) use standard read instructions (e.g., a READ or READ PAGE instruction) to perform a read of a memory cell at a default threshold voltage within each threshold voltage region required to define a bit of the memory cell. Single Level Cell (SLC) flash memory devices store a single bit of information in each cell and only require a read in a single threshold voltage region (the region that extends between the center of the voltage distribution for a 1 and the center of the voltage distribution for a 0) to identify the value of a bit, i.e., whether the cell is storing a 1 or a 0. Multi-level cell (MLC) flash memory devices store two bits of information in each cell, triple level cell (TLC) flash memory devices store three bits, quad level cell (QLC) flash memory devices store four bits, and penta level cell (PLC) flash memory devices store five bits of information in each cell.
Some SSDs use threshold-voltage-shift reads for reading flash memory devices to obtain the low levels of Uncorrectable Bit Error Rate (UBER) required for client and enterprise SSDs. Threshold-voltage-shift reads are performed by sending a threshold-voltage-shift read instruction to the flash memory device that is to be read. One or more Threshold-Voltage-Shift Offset (TVSO) values are sent with the threshold-voltage-shift read instruction. Each TVSO value indicates the amount by which the threshold voltage used to perform the read is to be offset from the corresponding default threshold voltage specified by the manufacturer of the flash memory device. Threshold-voltage-shift read instructions for MLC, TLC, QLC and PLC flash memory devices require that multiple TVSO values be sent to the flash memory device in order to perform each read.
Systems that use threshold-voltage-shift read instructions for reading flash memory devices typically require background reads of the flash memory devices to identify the correct TVSO value to use in performing reads. The background reads require significant bandwidth, reducing the amount of bandwidth available for performing host-requested operations.
For systems that use threshold-voltage-shift read instructions for reading flash memory devices, there is a need for a method and apparatus that can identify the TVSO values to be used in reads of a flash memory device without requiring background reads, and that can maintain UBER within acceptable levels during the lifetime of the SSD.
A method for identifying TVSO values to be used for reading a flash memory includes storing configuration files for a plurality of retention-and-read-disturb (RRD)-compensating regression neural networks (RNNs); identifying a current number of program and erase (PE) cycles of the flash memory; identifying TVSO values corresponding to the identified current number of PE cycles; and identifying a current retention time and a current number of read disturbs for the flash memory. The configuration file of the RRD-compensating RNN corresponding to the current number of PE cycles, the current retention time and the current number of read disturbs for the flash memory is selected and is loaded into a neural network engine to form an RNN core in the neural network engine. A neural network operation of the RNN core is performed to predict RRD-compensated TVSO values. The input to the neural network operation includes the identified TVSO values corresponding to the current number of PE cycles of the flash memory. The predicted RRD-compensated TVSO values are optionally stored. A read of the flash memory is performed using the predicted RRD-compensated TVSO values.
A flash controller includes a data storage; a neural network engine coupled to the data storage; and a status module configured for identifying a current number of PE cycles, a current retention time and a current number of read disturbs for a flash memory. A control module is coupled to the neural network engine, the data storage and the status module. The control module is configured to identify TVSO values corresponding to a current number of PE cycles of the flash memory, to select a configuration file of an RRD-compensating RNN corresponding to the current number of PE cycles, the current retention time and the current number of read disturbs of the flash memory, and to load the selected configuration file of the RRD-compensating RNN into the neural network engine to form an RNN core in the neural network engine. The neural network engine is configured to perform a neural network operation of the RNN core to predict RRD-compensated TVSO values, the input to the neural network operation including the identified TVSO values corresponding to the current number of PE cycles of the flash memory. A read module is coupled to the status module and to the neural network engine and is configured to perform a read of the flash memory using the predicted RRD-compensated TVSO values.
The method and apparatus of the present invention provide a simple and accurate process for identifying TVSO values to be used in reads of a flash memory device and do not require background reads for identifying the TVSO values. The bandwidth that would otherwise be consumed in performing background reads is thereby available for host-requested read, program and erase operations.
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in, and constitute a part of, this specification. The drawings illustrate various examples. The drawings referred to in this brief description are not drawn to scale.
Flash controller 3 includes data storage 4, status module 5, read module 6, decode module 7, write module 8, control module 9, neural network engine 10 and input and output (I/O) module 11. Control module 9 is coupled to data storage 4, status module 5, read module 6, decode module 7, write module 8, neural network engine 10 and I/O module 11. Decode module 7 is further coupled to read module 6. Status module 5 is further coupled to data storage 4, read module 6, write module 8 and neural network engine 10. Read module 6 is further coupled to data storage 4, neural network engine 10 and I/O module 11. Neural network engine 10 is further coupled to data storage 4. I/O module 11 is further coupled to data storage 4 and write module 8.
Some or all of modules 5-11 include dedicated circuits for performing operations, and some or all of modules 5-11 can be firmware that includes instructions that are performed on one or more processors to perform operations of flash controller 3, with the instructions stored in registers of one or more of modules 5-11 and/or stored in data storage 4 or memory device 13. Some or all of modules 5-11 include processors for performing instructions, and the instructions are loaded into flash controller 3 prior to operation of flash controller 3 by a user.
Flash controller 3 is configured to receive read and write instructions from a host computer at I/O module 11 and to perform program operations, erase operations and read operations on memory cells of flash memory devices 2 to complete the instructions from the host computer. For example, upon receiving a write instruction from a host computer, write module 8 is operable to program codewords into one or more of flash memory devices 2.
Reads of flash memory devices 2 are performed by sending a threshold-voltage-shift read instruction to the flash memory device 2 that is to be read. One or more TVSO values are sent with the threshold-voltage-shift read instruction. Flash memory devices 2 can be MLC flash memory devices, TLC flash memory devices, QLC flash memory devices or PLC flash memory devices. Status module 5 is operable to track the status and the operations of flash controller 3 and flash memory devices 2. Data storage 4 is a structure in flash controller 3 that is capable of storing data, and may include cache memory and/or static random-access memory (SRAM). Neural network engine 10 includes a specialized hardware module (e.g., a specialized configurable accelerator/processing circuit) specifically configured to perform a neural network operation.
In one example, RRD-compensating RNN 30 is generated by performing flash characterization testing to identify how the representative flash memory devices that are being tested perform under varying retention-time conditions and varying read-disturb conditions (together referred to as “transitory conditions”), where the varying retention-time and read-disturb conditions, i.e., the varying transitory conditions, are represented by corresponding transitory-reliability-states. The term “transitory-reliability-state” as used in the present application refers to a reliability state that includes the transitory characteristics of the flash memory devices, and specifically includes a reliability state in which the current number of program and erase (PE) cycles, the retention time (RT) and the number of read disturbs (RD) are specified. The term “non-transitory-reliability-state” as used in the present application refers to a reliability state that relates to the non-transitory characteristics of the flash memory devices, and specifically only includes reliability states that have zero values for the transitory conditions of the flash memory devices (where RT=0 and RD=0). The term “reliability state,” as used in the present application, refers to an interval relating to the age and usage of a memory location (e.g., a particular wordline/block/page) of a flash memory device that can be defined using measurements such as the number of PE cycles, RT and RD.
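As a purely illustrative sketch, and not part of the claimed implementation, the following Python fragment shows one way such reliability states could be represented; the class name, field names and retention-time unit are hypothetical stand-ins.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ReliabilityState:
    """Hypothetical representation of a reliability state as defined above."""
    pe_cycles: int          # current number of program and erase (PE) cycles
    retention_hours: float  # retention time (RT)
    read_disturbs: int      # number of read disturbs (RD)

    @property
    def is_non_transitory(self) -> bool:
        # A non-transitory-reliability-state has zero values for the
        # transitory conditions (RT = 0 and RD = 0).
        return self.retention_hours == 0 and self.read_disturbs == 0

# Example: a non-transitory state and a transitory state at the same PE count.
baseline = ReliabilityState(pe_cycles=3000, retention_hours=0, read_disturbs=0)
stressed = ReliabilityState(pe_cycles=3000, retention_hours=168.0, read_disturbs=50_000)
assert baseline.is_non_transitory and not stressed.is_non_transitory
```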
In one example, the lifetime of each flash memory 2 is divided into a plurality of periods based on the number of PE cycles in the lifetime of the flash memory, and a set of RRD-compensating RNNs is associated with each of the plurality of periods. One such example is illustrated in the accompanying drawings.
In one example, retention time is divided into four retention-time (RT) categories and the number of read disturbs (RD) is divided into four categories. It is appreciated that these categories are illustrative and that more or fewer categories could be used.
In one example, during flash characterization testing, one or more representative NAND flash memory devices are cycled to reach one of the non-transitory-reliability-states and the TVSO values minimizing the raw bit error rate (RBER) are calculated for all of the wordlines of all the blocks to generate a training dataset. The one or more representative NAND flash memory devices are then stressed through all the corresponding transitory-reliability-states and, for each transitory-reliability-state, the TVSO values minimizing the RBER are calculated for all the wordlines of all the blocks.
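The following Python fragment is a schematic sketch of how such a characterization dataset could be assembled; the function names, the TVSO sweep range and the simulated RBER curve in the usage example are assumptions, and `measure_rber` stands in for hardware reads of a representative device at shifted threshold voltages.

```python
def best_tvso(measure_rber, tvso_range=range(-32, 33)):
    """Return the TVSO offset in tvso_range that yields the lowest measured RBER.

    measure_rber is a stand-in for reading at the shifted threshold voltage
    and comparing the result against known programmed data.
    """
    return min(tvso_range, key=measure_rber)

def characterize(blocks, wordlines, measure_rber):
    # One record per (block, wordline): the RBER-minimizing TVSO value at the
    # reliability state the device is currently in.
    return {(b, w): best_tvso(lambda offset: measure_rber(b, w, offset))
            for b in range(blocks) for w in range(wordlines)}

# Toy usage with a simulated RBER curve whose minimum sits at offset -4.
records = characterize(blocks=2, wordlines=3,
                       measure_rber=lambda b, w, offset: (offset + 4) ** 2)
assert all(v == -4 for v in records.values())
```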
An example of the data records generated for the first period is shown in the accompanying drawings.
For each period, the data record corresponding to the non-transitory-reliability-state is used to train each of the RRD-compensating RNNs, and the data records corresponding to one of the transitory-reliability-states are used as the target data set in the training. In one example, some of the records corresponding to the particular transitory-reliability-state are used for verification, and after the RRD-compensating RNNs are trained and verified a configuration file 23 for the trained RRD-compensating RNN is stored. The configuration file 23 includes hyperparameters (bias values and weighting values) corresponding to the trained RRD-compensating RNN 30. The configuration file 23 can also indicate the architecture of the neural network. In one example, training uses TVSO values 61a from data records 61 as a training data set to train a first neural network 42-1m (that corresponds to reliability state 42-1) using TVSO values 62a from data records 62 as a target data set. The training can use a stochastic gradient descent (SGD) training process, performed to achieve predetermined performance criteria (e.g., a minimum squared-error value) for TVSO values responsive to perturbance of the transitory conditions. Alternatively, other types of training processes can be used such as, for example, an adaptive moment estimation (ADAM) training process.
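The following Python fragment is a hedged sketch of this training step, using a small one-hidden-layer regression network and plain stochastic gradient descent; the network size, the synthetic TVSO data and the learning rate are illustrative stand-ins rather than values taken from the present application.

```python
import numpy as np

rng = np.random.default_rng(0)

n_tvso = 7                                   # e.g., seven TVSO values per read
# Stand-in training data: inputs are TVSO records at the non-transitory state,
# targets are TVSO records measured at one transitory-reliability-state.
X = rng.normal(0.0, 3.0, (1000, n_tvso))
Y = X - 2.0 + 0.1 * rng.normal(size=X.shape)

# One hidden layer; the trained weights and biases are what would later be
# written into the configuration file for this RRD-compensating RNN.
hidden = 16
W1 = rng.normal(0, 0.1, (n_tvso, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(0, 0.1, (hidden, n_tvso)); b2 = np.zeros(n_tvso)

lr = 0.01
for epoch in range(50):                      # SGD over the training records
    for i in rng.permutation(len(X)):
        x, y = X[i:i+1], Y[i:i+1]
        h = np.maximum(0.0, x @ W1 + b1)     # ReLU hidden layer
        pred = h @ W2 + b2                   # regression output
        err = pred - y                       # gradient of 0.5 * squared error
        gW2, gb2 = h.T @ err, err[0]
        dh = (err @ W2.T) * (h > 0)
        gW1, gb1 = x.T @ dh, dh[0]
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

config_file = {"weights": [W1, W2], "biases": [b1, b2]}  # stored per trained RNN
```

An ADAM optimizer could be substituted for the plain SGD update without changing the rest of the flow.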
Thus, in the present example, since there are 15 RRD-compensating RNNs corresponding to each period and there are five periods, a total of 75 different RRD-compensating RNNs are generated. However, alternatively, more or fewer periods could be used and more or fewer transitory-reliability-states could be used.
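One plausible enumeration that is consistent with these counts, assuming that the combination of the lowest RT category and the lowest RD category is the non-transitory baseline, is sketched below; the category indices are hypothetical.

```python
# Four retention-time (RT) categories and four read-disturb (RD) categories give
# 16 combinations; treating the (RT = 0, RD = 0) combination as the non-transitory
# baseline leaves 15 transitory-reliability-states (and therefore 15
# RRD-compensating RNNs) per period, or 75 across five PE-cycle periods.
RT_CATEGORIES = [0, 1, 2, 3]   # category index 0 assumed to mean zero retention time
RD_CATEGORIES = [0, 1, 2, 3]   # category index 0 assumed to mean zero read disturbs
transitory_states = [(rt, rd) for rt in RT_CATEGORIES for rd in RD_CATEGORIES
                     if (rt, rd) != (0, 0)]
assert len(transitory_states) == 15
assert len(transitory_states) * 5 == 75     # five PE-cycle periods
```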
TVSO values corresponding to the current number of PE cycles are identified (103). The identified TVSO values can be the TVSO values specified by the manufacturer of the flash memory for the current number of PE cycles of the flash memory 2. In one example, the identified TVSO values are the TVSO values specified by the manufacturer for the current number of PE cycles and for a particular block and/or wordline and/or page of flash memory device 2.
In one example, step 103 is performed using a PE-TVSO lookup table that includes PE cycle values and associated TVSO values. The PE-TVSO table can also specify TVSO values corresponding to blocks and/or pages and/or wordlines, or to groups of blocks and/or groups of wordlines and/or groups of pages, as is known in the art. In this example, a PE-TVSO lookup table 24 is stored in data storage 4 of flash controller 3.
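A minimal sketch of such a PE-TVSO lookup is shown below; the PE-cycle breakpoints and the TVSO values in the table are placeholders, not values from the present application.

```python
# Each entry pairs the minimum PE-cycle count of a period with the
# manufacturer-specified TVSO values to use for reads within that period.
PE_TVSO_TABLE = [
    (0,      [0,  0,  0,  0,  0,  0,  0]),
    (3000,   [-1, -1,  0, -2, -1,  0, -1]),
    (6000,   [-2, -3, -1, -4, -2, -1, -3]),
    (9000,   [-4, -5, -2, -6, -3, -2, -5]),
    (12000,  [-6, -7, -4, -8, -5, -3, -7]),
]

def lookup_tvso_for_pe(pe_cycles: int) -> list[int]:
    """Return the TVSO values associated with the current number of PE cycles."""
    values = PE_TVSO_TABLE[0][1]
    for threshold, tvso in PE_TVSO_TABLE:
        if pe_cycles >= threshold:
            values = tvso
    return values

assert lookup_tvso_for_pe(7500) == [-2, -3, -1, -4, -2, -1, -3]
```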
In one example of step 103, the five periods corresponding to non-transitory-reliability-states 41-1, 41-2, 41-3, 41-4 and 41-5 shown in the accompanying drawings are used.
A current retention time and a current number of read disturbs for the flash memory are identified (104). In one example, status module 5 of flash controller 3 is operable to identify the current retention time and the current number of read disturbs for the flash memory 2.
The configuration file of the RRD-compensating RNN corresponding to the current number of PE cycles, the current retention time and the current number of read disturbs for the flash memory is selected (105) and the selected configuration file of the RRD-compensating RNN is loaded (106) into a neural network engine to form an RNN core in the neural network engine. In one example, control module 9 of flash controller 3 is operable to perform the selecting of step 105 and the loading of step 106.
In one example, configuration-file-lookup table 26 includes an indication of a number of PE cycles, RT and RD and indicates, for each particular indication of the number of PE cycles, RT and RD, an identifier of an associated RRD-compensating RNN configuration file 23. The identifier of the associated RRD-compensating RNN configuration file 23 can be in the form of an index that identifies the RRD-compensating RNN configuration file 23 or an address indicating where the RRD-compensating RNN configuration file 23 is stored. In one example, control module 9 is operable in step 105 to perform a lookup in configuration-file-lookup table 26 using the current number of PE cycles, the current RT and the current RD of a particular block to identify the address in data storage 4 where the configuration file 23 to use is stored, and the address in data storage 4 is utilized to obtain the RRD-compensating RNN configuration file 23 to be loaded into the neural network engine 10.
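A hedged sketch of such a configuration-file lookup is shown below; the category boundaries, the file identifiers and the helper names are assumptions made only for illustration.

```python
import bisect

PE_PERIOD_BOUNDS = [3000, 6000, 9000, 12000]   # upper bounds of the first four periods
RT_BOUNDS = [24, 168, 720]                     # hours; yields four RT categories
RD_BOUNDS = [1000, 10_000, 100_000]            # reads; yields four RD categories

def category(value, bounds):
    # Map a raw PE/RT/RD value to its category index (0 .. len(bounds)).
    return bisect.bisect_right(bounds, value)

def select_config_file(pe_cycles, retention_hours, read_disturbs, config_file_lookup):
    key = (category(pe_cycles, PE_PERIOD_BOUNDS),
           category(retention_hours, RT_BOUNDS),
           category(read_disturbs, RD_BOUNDS))
    return config_file_lookup[key]             # e.g., an index or a storage address

# Toy lookup table: every (period, RT category, RD category) maps to a file name.
lookup = {(p, rt, rd): f"cfg_{p}_{rt}_{rd}.bin"
          for p in range(5) for rt in range(4) for rd in range(4)}
assert select_config_file(7000, 200, 500, lookup) == "cfg_2_2_0.bin"
```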
In one example, the architecture of the neural network engine 10 is fixed, predetermined, or is otherwise established prior to starting method 100, and each of the stored RRD-compensating RNNs has the same number of input neurons, output neurons and connections between neurons and uses the same activation functions, such that the selecting and loading of steps 105-106 require only the selection and loading of bias values and weighting values into neural network engine 10. In one example, neural network engine 10 includes configurable logic and the neural network engine is configured in accordance with an architecture that is common to each of the RRD-compensating RNNs prior to performing step 101.
A neural network operation of the RNN core is performed (107) to predict RRD-compensated TVSO values, the input to the neural network operation including the identified TVSO values corresponding to the current number of PE cycles of the flash memory. In one example, neural network engine 10 is operable to perform a neural network operation of the RNN core to predict RRD-compensated TVSO values, the input to the neural network operation comprising the identified TVSO values corresponding to the current number of PE cycles of the flash memory 2.
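The following Python fragment sketches such an inference pass for a small feed-forward regression core; the layer shapes, the rounding to integer offsets and the randomly initialized stand-in configuration file are assumptions, not the claimed engine.

```python
import numpy as np

def run_rnn_core(config_file: dict, tvso_in: np.ndarray) -> np.ndarray:
    """Run the loaded RNN core once on the identified PE-cycle TVSO values."""
    (W1, W2), (b1, b2) = config_file["weights"], config_file["biases"]
    h = np.maximum(0.0, tvso_in @ W1 + b1)      # hidden layer (ReLU)
    out = h @ W2 + b2                           # regression output
    return np.rint(out).astype(int)             # integer RRD-compensated offsets

# Toy usage: randomly initialized parameters stand in for a loaded configuration file.
rng = np.random.default_rng(1)
cfg = {"weights": [rng.normal(0, 0.1, (7, 16)), rng.normal(0, 0.1, (16, 7))],
       "biases": [np.zeros(16), np.zeros(7)]}
identified_tvso = np.array([-2., -3., -1., -4., -2., -1., -3.])
compensated_tvso = run_rnn_core(cfg, identified_tvso)
```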
Optionally, the predicted RRD-compensated TVSO values 83 are stored (108). In one example, control module 9 is optionally operable to store the predicted RRD-compensated TVSO values 83 in data storage 4 or in memory device 13. In one example, the predicted RRD-compensated TVSO values 83 are stored in a TVSO-read lookup table 25 that is stored in data storage 4. In one example, TVSO-read lookup table 25 includes an indication of a wordline index, a block number (and optionally a page indicator) and, for each indication of a wordline index and block number (and optional page indicator), the associated RRD-compensated TVSO values to be used for reading that wordline/block/page.
Steps 102-108 are optionally repeated (109). In one example, steps 102-108 are repeated each time the current number of PE cycles reaches a predetermined PE-cycle threshold, the current RT reaches a predetermined retention-time threshold, or the current RD reaches a read-disturb threshold. In one example, control module 9 is configured to determine when the current number of PE cycles reaches a predetermined PE-cycle threshold, when the current RT reaches a predetermined retention-time threshold and when the current RD reaches a read-disturb threshold. Each time one of these thresholds is reached, control module 9 is operable to repeat the identifying of a current number of PE cycles, the identifying of TVSO values, the identifying of a current RT and RD, the selecting of the configuration file 23 and the loading of the selected configuration file 23; neural network engine 10 is operable to repeat the neural network operation of the RNN core to predict the RRD-compensated TVSO values; and control module 9 is configured to update the TVSO values to be used for reading the flash memory 2 by replacing at least some of the TVSO values stored in the TVSO-read lookup table 25 with the predicted RRD-compensated TVSO values, so as to keep the values in TVSO-read lookup table 25 current (corresponding to the current transitory-reliability-state). In one example, a plurality of PE-cycle thresholds are used, including a PE-cycle threshold corresponding to each of the periods used to generate the RRD-compensating RNNs (corresponding to the non-transitory-reliability-states used to generate the RRD-compensating RNNs), such that the predicted RRD-compensated TVSO values stored in the TVSO-read lookup table correspond to the current period in the lifetime of the flash memory 2. In one example, a plurality of retention-time thresholds are used (including a retention-time threshold corresponding to the retention-time limits of each of the transitory-reliability-states) and a plurality of read-disturb thresholds are used (including a read-disturb threshold corresponding to the read-disturb limits of each of the transitory-reliability-states used to generate the RRD-compensating RNNs), such that the predicted RRD-compensated TVSO values stored in the TVSO-read lookup table correspond to the current transitory-reliability-state of the flash memory 2.
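A hedged sketch of this threshold-triggered refresh is shown below; the threshold values and the helper names are placeholders.

```python
PE_THRESHOLDS = [3000, 6000, 9000, 12000]   # one per PE-cycle period boundary
RT_THRESHOLDS = [24, 168, 720]              # hours, per retention-time category limit
RD_THRESHOLDS = [1000, 10_000, 100_000]     # reads, per read-disturb category limit

def crossed(previous, current, thresholds):
    """True if any threshold lies in the interval (previous, current]."""
    return any(previous < t <= current for t in thresholds)

def maybe_refresh(prev_state, curr_state, refresh_tvso_table):
    # prev_state/curr_state are (PE cycles, RT, RD) tuples tracked by the controller.
    pe0, rt0, rd0 = prev_state
    pe1, rt1, rd1 = curr_state
    if (crossed(pe0, pe1, PE_THRESHOLDS) or crossed(rt0, rt1, RT_THRESHOLDS)
            or crossed(rd0, rd1, RD_THRESHOLDS)):
        refresh_tvso_table()                # i.e., repeat steps 102-108

# Toy usage: crossing the 6000 PE-cycle threshold triggers a refresh.
maybe_refresh((5900, 10, 500), (6050, 12, 520), lambda: print("refresh"))
```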
A read of the flash memory is performed (110) using the predicted RRD-compensated TVSO values. In one example, read module 6 of flash controller 3 is operable to perform the read of flash memory 2 using the predicted RRD-compensated TVSO values.
In one example, in response to receiving a read instruction that identifies a read address, status module 5 is operable to identify a current number of PE cycles, a current RT and a current RD corresponding to the read address; read module 6 is configured to perform a lookup in the TVSO-read lookup table 25 to identify a current set of TVSO values corresponding to the read address, the current number of PE cycles, the current RT and the current RD; and read module 6 is configured to perform the read of flash memory 2 in step 110 using the identified current set of TVSO values.
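The following Python fragment is a minimal sketch of this read path; the address-translation helper, the table contents and the stand-in for the threshold-voltage-shift read command are all hypothetical.

```python
from typing import Callable

def read_with_compensation(read_address: int,
                           address_to_location: Callable,
                           tvso_read_table: dict,
                           send_tvs_read: Callable) -> bytes:
    # Translate the read address to the wordline/block it falls in, fetch the
    # current RRD-compensated TVSO values for that location, and issue the
    # threshold-voltage-shift read with those offsets.
    block, wordline = address_to_location(read_address)
    tvso_values = tvso_read_table[(block, wordline)]
    return send_tvs_read(read_address, tvso_values)

# Toy usage with stand-ins for address translation and the flash read command.
table = {(17, 42): [-3, -4, -2, -5, -3, -2, -4]}
data = read_with_compensation(
    read_address=0x12A00,
    address_to_location=lambda addr: (17, 42),
    tvso_read_table=table,
    send_tvs_read=lambda addr, offsets: bytes(16))
```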
The present method and apparatus do not require background reads of flash memories 2 to identify the TVSO values to be used for performing reads of flash memories 2. Accordingly, the bandwidth that conventional SSDs consume in performing background reads remains available for operations on flash memory devices 2, resulting in a performance improvement as compared to conventional SSDs that require background reads for identifying the TVSO values to be used for performing reads.
In the description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one of ordinary skill in the art that the present invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical, and other changes may be made without departing from the scope of the present invention.
The present application claims priority to U.S. Provisional Patent Application Ser. No. 63/192,543 filed on May 24, 2021, the contents of which are incorporated by reference herein in their entirety.