Fixed, deterministic Quality of Service (QoS) in Solid State Drives (SSDs) can be estimated at the time that the flash controller of the SSD is designed, by utilizing simulations or pre-silicon emulations on a Field Programmable Gate Array (FPGA). However, variable, probabilistic QoS cannot be fully addressed at the time that the flash controller is designed because it is strictly correlated with how the SSD is used. Variable, probabilistic factors that affect QoS include non-deterministic user workloads (such as multi-tenant virtualized environments), multiple NAND program-suspend operations, NAND read operations colliding with program operations, and multiple queues.
One area where variable, probabilistic factors can affect drive performance is the command queue. Such factors can result in one or more commands being loaded into the command queue that, when processed, have a latency greater than the maximum allowed latency for the SSD. Accordingly, there is a need for a method and apparatus that control the effects of variable, probabilistic QoS factors on the commands in the command queue, so that the QoS of the SSD can be maintained within predetermined tolerance requirements over the lifetime of the SSD.
A method for meeting QoS requirements in a flash controller includes receiving a configuration file for a QoS neural network at the flash controller, where the configuration file includes weight and bias values for the QoS neural network, and loading the configuration file into a neural network engine. A current command is received at one or more instruction queues. Feature values corresponding to commands in the one or more instruction queues are identified, and the identified feature values are loaded into the neural network engine. A neural network operation of the QoS neural network is performed in the neural network engine, using the identified feature values as input, to predict the latency of the current command. The predicted latency is compared to a first latency threshold; when the predicted latency exceeds the first latency threshold, one or more of the commands in the one or more instruction queues are modified. When the predicted latency does not exceed the first latency threshold, the commands in the one or more instruction queues are not modified. A next command in the one or more instruction queues is then performed.
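For illustration only, the following is a minimal sketch of this method in Python. The queue representation, the helper names, and the callable standing in for the neural network engine are assumptions, not the disclosed implementation.

```python
# Minimal sketch of the method above; all names and data shapes are
# illustrative assumptions, not the flash controller's actual interfaces.
from dataclasses import dataclass, field

@dataclass
class Command:
    kind: str      # "read", "program", "erase", "program-suspend", ...
    channel: int
    die: int
    plane: int

@dataclass
class InstructionQueue:
    commands: list = field(default_factory=list)

def identify_feature_values(queue):
    # Stand-in for feature identification: per-type counts plus queue depth.
    kinds = [c.kind for c in queue.commands]
    return [kinds.count("read"), kinds.count("program"),
            kinds.count("erase"), len(queue.commands)]

def modify_commands(queue):
    # Stand-in for queue modification: reorder so read commands run first.
    queue.commands.sort(key=lambda c: c.kind != "read")

def handle_current_command(queue, nn_engine, first_latency_threshold, cmd):
    queue.commands.append(cmd)                        # current command received
    features = identify_feature_values(queue)         # identify feature values
    predicted_latency = nn_engine(features)           # QoS neural network operation
    if predicted_latency > first_latency_threshold:   # compare to first threshold
        modify_commands(queue)                        # modify queued commands
    return queue.commands.pop(0)                      # next command to perform
```

For example, `handle_current_command(InstructionQueue(), lambda f: 0.0, 1.0, Command("read", 0, 0, 0))` predicts latency for a one-command queue and then returns that command for execution.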
A flash controller is disclosed that includes a read circuit, a decode circuit coupled to the read circuit, a program circuit, an erase circuit, a neural network engine, a control circuit, and an input and output (I/O) circuit. The I/O circuit is to receive a configuration file for a Quality of Service (QoS) neural network, the configuration file including weight and bias values for the QoS neural network. The control circuit is to load the configuration file of the QoS neural network into the neural network engine, to identify feature values corresponding to commands in one or more instruction queues, and to load the identified feature values into the neural network engine. The neural network engine is to perform a neural network operation of the QoS neural network, using the identified feature values as input, to predict the latency of a current command. The control circuit is to compare the predicted latency to a first latency threshold and, when the predicted latency exceeds the first latency threshold, to modify one or more of the commands in the one or more instruction queues. The control circuit is to leave the commands in the one or more instruction queues unmodified when the predicted latency does not exceed the first latency threshold. One of the read circuit, the program circuit, and the erase circuit is to perform a next command in the one or more instruction queues.
By removing, reordering, or rescheduling commands in the one or more command queues before the next command is performed, the method and apparatus of the present invention prevent the execution of commands that have unacceptable latency. Control of variable, probabilistic factors (to the extent those factors affect the commands in the command queue) is thereby achieved. Moreover, by removing, reordering, or rescheduling commands in the command queue that have unacceptable latency, the QoS of the SSD is maintained within the predetermined tolerance requirements for the SSD over the lifetime of the SSD.
The accompanying drawings are included to provide a further understanding of the invention, and are incorporated in, and constitute a part of, this specification. The drawings illustrate various examples. The drawings referred to in this brief description are not drawn to scale.
Flash controller 3 is an integrated circuit device that includes data storage circuit 4, status circuit 5, read circuit 6, decode circuit 7, program circuit 8, control circuit 9, neural network engine 10, instruction queue 14, input and output (I/O) circuit 11, and erase circuit 12. In one example, a plurality of instruction queues 14 are provided, and in one example one or more instruction queues 14 are contained within data storage circuit 4. Control circuit 9 is coupled to data storage circuit 4, status circuit 5, read circuit 6, decode circuit 7, program circuit 8, neural network engine 10, I/O circuit 11, erase circuit 12, and to one or more instruction queues 14. Decode circuit 7 is further coupled to read circuit 6. Status circuit 5 is further coupled to data storage circuit 4, read circuit 6, program circuit 8, neural network engine 10, and erase circuit 12. Read circuit 6 is further coupled to data storage circuit 4, neural network engine 10, I/O circuit 11, and to one or more instruction queues 14. Neural network engine 10 is further coupled to data storage circuit 4. Erase circuit 12 is further coupled to one or more instruction queues 14. I/O circuit 11 is further coupled to data storage circuit 4, program circuit 8, and erase circuit 12. Data storage circuit 4 further comprises a configuration file for QoS neural network 15 and optional reduced-noise-pattern lookup table 16.
Some or all of circuits 5-12 include circuits that are dedicated to performing operations, and some or all of circuits 5-12 can be implemented as firmware that includes instructions performed on one or more processors, with the instructions stored in registers of one or more of circuits 5-12 and/or stored in data storage circuit 4 or memory device 13. Some or all of circuits 5-12 include processors for performing instructions, and instructions are loaded into flash controller 3 prior to operation of flash controller 3 by a user.
Instruction queue 14 is operable for storing instructions to be performed by read circuit 6, program circuit 8, and erase circuit 12. Though instruction queue 14 is described in the singular, as noted above, a plurality of instruction queues 14 can be provided.
I/O circuit 11 includes one or more circuits for coupling input into flash controller 3 and for coupling output from flash controller 3. Flash controller 3 is configured to receive read and write instructions from a host computer at I/O circuit 11, and to perform program operations, erase operations, and read operations on memory cells of flash memory devices 2 to complete the instructions from the host computer. For example, upon receiving a write instruction from a host computer, I/O circuit 11 couples the write instruction to program circuit 8. Program circuit 8 is operable to program codewords into one or more of flash memory devices 2. Upon receiving a read instruction from the host computer, I/O circuit 11 couples the read instruction to read circuit 6. Read circuit 6 is operable to perform a read of a flash memory device 2 by sending a read command to the flash memory device 2 that is to be read, and decode circuit 7 is operable to decode the results of the read command. Erase circuit 12 is operable to erase memory locations in one or more of flash memory devices 2. Status circuit 5 is operable to track the status and the operations of flash controller 3 and flash memory devices 2. Data storage circuit 4 is a structure in flash controller 3 that is capable of storing data, and may include cache memory and/or static random-access memory (SRAM). Neural network engine 10 includes a specialized hardware module (e.g., a specialized configurable accelerator) specifically configured to perform neural network operations.
Configuration file for QoS neural network 15 is a configuration file for a neural network that uses features corresponding to characteristics of an instruction queue, referred to as "instruction-queue features," for predicting the latency of a current command in the instruction queue. Instruction-queue features that relate to the commands in the instruction queue, and to the scheduling order, at the time that the current command is received at the instruction queue may be referred to jointly as "current instruction queue features." Current instruction queue features include one or more of the following, without limitation: a scheduling order, an erase-suspend count, a program-suspend count, an erase count, a program count, a read count, a command type, a channel index, a die index, a plane index, and a current queue depth. Instruction-queue features can also include one or more features relating to commands that were previously in the instruction queue ("previous-command features"). The term previous-command features, as used in the present application, includes features relating to commands that were previously processed through the one or more instruction queues, and specifically includes a write amplification factor and a read/write ratio. The scheduling order indicates the order in which the commands are to be executed in all queues, such as, for example, First-In-First-Out (FIFO), Last-In-First-Out (LIFO), Read-First, Program-First, Erase-First, or Read-Program-Interleaving. The erase-suspend count indicates the number of erase-suspend commands in the instruction queue that includes the current command. The program-suspend count indicates the number of program-suspend commands in the instruction queue that includes the current command. The erase count indicates the number of erase commands in the instruction queue that includes the current command. The program count indicates the number of program commands in the instruction queue that includes the current command. The read count indicates the number of read commands in the instruction queue that includes the current command. The command type indicates the type of the current command. The channel index indicates the channel associated with the current command (the channel that will be used to route the command to the flash memory device 2). The die index indicates the die associated with the current command (the flash memory device 2 die that will receive the command).
The plane index indicates the plane associated with the current command (the plane that will receive the command). In one example, flash memory devices 2 are divided into 2 or 4 planes, where each plane includes a fraction of the total number of blocks of the die. In one example, each flash memory device 2 has two planes and half of the blocks are on the first plane while the other half are on the second plane.
In one example, the current instruction queue features include only the command type, channel index, and die index associated with the current command. Alternatively, the current instruction queue features include the command type, channel index, die index, and plane index of some or all of the other commands in the instruction queue.
The current queue depth indicates the number of commands in the instruction queue that contains the current command. The write amplification factor is a numerical value that represents the number of writes that the SSD has to perform (by commands coupled through the one or more instruction queues) relative to the number of writes intended by the host computer. The read/write ratio is a numerical value that represents the number of reads that the SSD has performed (by read commands coupled through the one or more instruction queues) in a given period divided by the number of writes performed in the given period.
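As a sketch only, the thirteen features described above might be flattened into a numeric input vector as follows; the ordering and the integer encodings are illustrative assumptions:

```python
# Illustrative encoding of the instruction-queue features listed above
# into a numeric vector; the orderings and code values are assumptions.
SCHEDULING_ORDERS = {"FIFO": 0, "LIFO": 1, "Read-First": 2,
                     "Program-First": 3, "Erase-First": 4,
                     "Read-Program-Interleaving": 5}
COMMAND_TYPES = {"read": 0, "program": 1, "erase": 2,
                 "program-suspend": 3, "erase-suspend": 4}

def feature_vector(q):
    """q is a dict describing the current instruction-queue state."""
    return [
        SCHEDULING_ORDERS[q["scheduling_order"]],
        q["erase_suspend_count"],
        q["program_suspend_count"],
        q["erase_count"],
        q["program_count"],
        q["read_count"],
        COMMAND_TYPES[q["command_type"]],   # type of the current command
        q["channel_index"],
        q["die_index"],
        q["plane_index"],
        q["queue_depth"],
        q["write_amplification"],           # previous-command feature
        q["read_write_ratio"],              # previous-command feature
    ]
```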
In one example, the configuration file for QoS neural network 15 is generated by performing SSD characterization testing of a representative SSD of the same type as SSD 1 (e.g., one that includes the same type of flash controller as flash controller 3 and the same type of memory devices as flash memory devices 2) to calculate the latency of a current command in the instruction queue (e.g., the command most recently added to the instruction queue) during the testing, referred to hereinafter as the current-test-command. The conditions of each test used to calculate the latency of the current-test-command are stored in a corresponding data record along with the calculated latency; the conditions of the test that correspond to the current instruction queue features (and optionally previous-command features) are referred to jointly as "test-feature values."
In one example, the testing generates data records 30 by using a variety of different standard workload data sets to perform the operations indicated by the instructions in the one or more instruction queues on all channels, dies, and planes, and using each type of scheduling order that flash controller 3 is capable of using. In one example, scheduling order types include FIFO, LIFO, Read-First (where read commands are performed first), Program-First (where program commands are performed first), Erase-First (where erase commands are performed first), and Read-Program-Interleaving (where read and program commands are interleaved such that a read command is immediately followed by a corresponding program command). Testing can be performed at all possible erase-suspend count values, all possible program-suspend count values, all possible erase count values, all possible program count values, all possible read count values, and one or more command type values (e.g., read, program, erase, erase-suspend, and program-suspend commands, where other types of commands are disregarded). The testing uses a large number of addresses such that all possible channel index values, die index values, plane index values, and current queue depth values are cycled through. Testing is further performed at different write amplification values and read/write ratios. In one example, the testing generates over 100,000 data records 30.
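A single data record 30 might therefore pair the test-feature values with the calculated latency, as in this hypothetical sketch (the field names and all values are invented for illustration):

```python
# Hypothetical shape of one data record 30: test-feature values plus the
# measured latency of the current-test-command. Names/values are invented.
record = {
    "scheduling_order": "FIFO",
    "erase_suspend_count": 2,
    "program_suspend_count": 1,
    "erase_count": 4,
    "program_count": 7,
    "read_count": 12,
    "command_type": "read",        # the current-test-command
    "channel_index": 3,
    "die_index": 0,
    "plane_index": 1,
    "queue_depth": 26,
    "write_amplification": 1.8,    # previous-command feature
    "read_write_ratio": 2.4,       # previous-command feature
    "measured_latency_us": 95.0,   # calculated latency (training target)
}
```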
In one example, some of data records 30 are used for validation and other data records 30 are used for testing, with the data records 30 used for validation referred to as a validation data set and the data records 30 used for testing referred to as a test data set. In this example, a deep neural network is generated and is trained using the training data set to predict latency, using the calculated latency values as target values during the training. The resulting neural network is then validated using the validation data set and tested using the test data set. When the neural network passes testing (e.g., meets a predetermined root mean square (RMS) error value), the configuration file (or files) for the QoS neural network 45 is saved.
In one example, the training algorithm sets aside a set of records as a test dataset and, when the training operation is completed, computes the mean squared error (MSE) between the actual latency (i.e., the measured latency) and the predicted latency for the samples of the test dataset. If the MSE is below a defined value (e.g., 10⁻⁴), the deep neural network passes the testing criteria and the trained model is saved. If the MSE is not below the defined value, additional training records are added and the deep neural network is retrained using the enlarged training dataset.
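The following is a sketch of this train-and-test loop, using scikit-learn purely as a stand-in framework; the model type, layer sizes, and record increments are assumptions, and only the 10⁻⁴ MSE acceptance threshold comes from the text.

```python
# Hedged sketch of the MSE acceptance loop; scikit-learn is a stand-in,
# not the framework used to generate the configuration file.
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error
import joblib

def train_until_accepted(X_train, y_train, X_test, y_test, step=10_000):
    n = step
    while True:
        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500,
                             random_state=0)
        model.fit(X_train[:n], y_train[:n])       # train on the selected records
        mse = mean_squared_error(y_test, model.predict(X_test))
        if mse < 1e-4 or n >= len(X_train):       # acceptance criterion from the text
            joblib.dump(model, "qos_nn_config.joblib")  # save the trained model
            return model, mse
        n += step                                 # add training records and retrain
```

In this sketch the trained weight and bias values, corresponding to the configuration file's contents, are available as `model.coefs_` and `model.intercepts_`.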
In the present example, configuration file for QoS neural network 15 (i.e., the configuration file for QoS neural network 45 generated and saved as described above) is loaded into SSD 1 after assembly of SSD 1 (e.g., downloaded from a web server, received by I/O circuit 11, and stored in data storage circuit 4 by I/O circuit 11 prior to delivery of SSD 1 to an end user).
Continuing with the illustrated example, in method 100 a configuration file for the QoS neural network is received (101) at flash controller 3 and loaded (102) into neural network engine 10. A current command is received (103) at the one or more instruction queues 14, and feature values corresponding to commands in the one or more instruction queues 14 are identified (104) and loaded into neural network engine 10.
In one example, the architecture of neural network engine 10 is fixed, predetermined, or otherwise established prior to starting method 100, and the configuration file received in step 101 includes only weight and bias values for the QoS neural network, such that the loading of step 102 requires only the loading of bias values and weighting values into neural network engine 10. Alternatively, neural network engine 10 includes configurable logic; the configuration file of the QoS neural network 15 indicates the number of input neurons, the number of output neurons, the connections between neurons, and one or more activation functions to use; and neural network engine 10 is configured in accordance with the configuration file of the QoS neural network prior to performing step 102.
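As a sketch of the fixed-architecture case, under the assumption of a simple fully connected network with ReLU activations (the layer sizes and activation function are illustrative, not specified by the text):

```python
# Sketch of step 102 for a fixed-architecture engine: only weight and bias
# arrays are loaded; the layer sizes are already established. The layer
# sizes and the ReLU choice are assumptions.
import numpy as np

class FixedQoSEngine:
    def __init__(self, layer_sizes=(13, 64, 64, 1)):
        self.sizes = layer_sizes
        self.weights, self.biases = None, None

    def load_configuration(self, weights, biases):
        # Step 102: accept only values whose shapes match the fixed layers.
        for w, (m, n) in zip(weights, zip(self.sizes, self.sizes[1:])):
            assert w.shape == (m, n), "configuration does not fit the engine"
        self.weights, self.biases = weights, biases

    def predict(self, features):
        x = np.asarray(features, dtype=float)
        for w, b in zip(self.weights, self.biases):
            x = np.maximum(x @ w + b, 0.0)   # ReLU at every layer (assumption)
        return float(x[0])                   # predicted latency
```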
A neural network operation of the QoS neural network is performed (105) in neural network engine 10 to predict the latency of the current command. Neural network engine 10 uses as input to the neural network operation the identified feature values corresponding to commands in the instruction queues 14 (i.e., the identified instruction-queue feature values) and, optionally, one or more previous-command feature values.
Many other combinations of feature values are possible.
The predicted latency is compared (106) to a first latency threshold.
When the predicted latency exceeds the first latency threshold, the commands in the one or more instruction queues are modified, or the scheduling order for the one or more instruction queues is modified (108).
Rescheduling one or more erase-suspend commands may be achieved by changing (reducing) the maximum number of erase-suspend commands allowed, which results in one or more erase-suspend commands being rescheduled (the rescheduling is operable to remove one or more erase-suspend commands from the one or more instruction queues). Rescheduling one or more program-suspend commands may be achieved by changing the maximum number of program-suspend commands allowed, which results in one or more program-suspend commands being rescheduled (where the rescheduling is operable to remove one or more program-suspend commands from the one or more instruction queues). The modifying of step 108 modifies the commands such that the modified commands do not exceed the first latency threshold. In one example, step 108 includes modifying the commands and then testing the modified commands to make sure that the modified commands do not exceed the first latency threshold.
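For illustration, a sketch of rescheduling by lowering the erase-suspend limit follows; the queue representation and what ultimately happens to the rescheduled commands are assumptions.

```python
# Sketch of limit-based rescheduling: commands beyond the lowered
# erase-suspend limit are removed from the queue for later resubmission.
def enforce_erase_suspend_limit(queue, max_erase_suspend):
    kept, rescheduled = [], []
    for cmd in queue:
        already_kept = sum(c["type"] == "erase-suspend" for c in kept)
        if cmd["type"] == "erase-suspend" and already_kept >= max_erase_suspend:
            rescheduled.append(cmd)   # removed from the queue (rescheduled)
        else:
            kept.append(cmd)
    return kept, rescheduled
```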
In one example, in each iteration of step 201, the order of the commands in the one or more instruction queues is changed; erase-suspend commands in the one or more instruction queues are rescheduled (e.g., by changing the maximum allowed number of erase-suspend commands to reduce the number of erase-suspend commands in the one or more instruction queues 14); program-suspend commands in the one or more instruction queues are rescheduled (e.g., by changing the maximum allowed number of program-suspend commands to reduce the number of program-suspend commands in the one or more instruction queues); or the scheduling order for the one or more instruction queues is changed to a different scheduling order.
A potential-command feature set is generated (202) corresponding to each of the potential-commands.
A neural network operation is performed (203). The neural network operation uses as input the feature values of one of the potential-command feature sets to predict the latency for the corresponding potential-command.
The predicted latency for the potential-command is compared (204) to a second latency threshold. The second latency threshold may be the same as the first latency threshold; however, a latency threshold lower than the first latency threshold can also be used.
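A sketch of the iterative search of steps 201-204 follows, assuming candidate queue modifications are supplied as functions and a feature-extraction helper like the one sketched earlier; none of these names come from the text.

```python
# Sketch of steps 201-204: try candidate queue modifications until one
# yields a predicted latency at or below the second latency threshold.
def find_reduced_noise_pattern(queue, nn_engine, feature_fn,
                               second_latency_threshold, candidates):
    for modify in candidates:                       # step 201: next candidate
        trial_queue = modify(queue)
        features = feature_fn(trial_queue)          # step 202: feature set
        predicted = nn_engine(features)             # step 203: NN operation
        if predicted <= second_latency_threshold:   # step 204: compare
            return trial_queue                      # reduced-noise pattern found
    return None                                     # no candidate met the threshold
```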
Optionally, feature values corresponding to noisy patterns and associated values corresponding to reduced-noise-instruction-queue patterns are stored as numeric and/or alphanumeric values in reduced-noise-pattern lookup table 16 (208). In one example, reduced-noise-pattern lookup table 16 includes feature values corresponding to some or all of the features of the noisy pattern identified in step 201 (e.g., a set of features corresponding to the noisy pattern) and a value corresponding to the reduced-noise-instruction-queue pattern identified in step 206 (e.g., a reduced-noise-pattern index).
The commands in the instruction queue are modified, or the scheduling order in the one or more instruction queues is changed (209), to correspond to the reduced-noise-instruction-queue pattern.
A next command in the one or more instruction queues is performed (109). In the present example, step 108 is performed prior to step 109.
Steps 104-109 are optionally repeated (110) each time a subsequent command is received in the instruction queue. In the repeating of steps 104-109, step 108 is repeated only when the predicted latency exceeds the first latency threshold.
In one example, reduced-noise-pattern lookup table 16 is stored in data storage circuit 4 and is used in a method 300 for modifying the commands in the one or more instruction queues.
In one example of method 300, reduced-noise-pattern lookup table 16 includes feature values corresponding to a noisy pattern (including scheduling order, erase-suspend count, program-suspend count, erase count, program count, read count, command type, channel index, die index, plane index, current queue depth, write amplification factor, and read/write ratio) and a corresponding reduced-noise-pattern index. The reduced-noise-pattern index indicates one or more of the following: that one or more erase-suspend commands are to be rescheduled (e.g., by indicating an erase-suspend command limit); that one or more program-suspend commands are to be rescheduled (e.g., by indicating a program-suspend command limit); a particular change to the order of the commands in the one or more instruction queues; that the scheduling order is to be changed to FIFO; that the scheduling order is to be changed to LIFO; that the scheduling order is to be changed to Read-First; that the scheduling order is to be changed to Program-First; that the scheduling order is to be changed to Erase-First; or that the scheduling order is to be changed to Read-Program-Interleaving.
In this example, in step 209 the commands in the instruction queue are modified as indicated by the reduced-noise-pattern index. For example, when the reduced-noise-pattern index indicates that one or more erase-suspend commands are to be rescheduled, control circuit 9 is operable to change the erase-suspend command limit (reducing the maximum number of erase-suspend commands allowed) to reschedule the one or more erase-suspend commands (e.g., thereby removing one or more erase-suspend commands from the one or more instruction queues). When the reduced-noise-pattern index indicates that one or more program-suspend commands are to be rescheduled, control circuit 9 is operable to change the program-suspend command limit (reducing the maximum number of program-suspend commands allowed) to reschedule the one or more program-suspend commands (e.g., thereby removing one or more program-suspend commands from the one or more instruction queues). When the reduced-noise-pattern index indicates that the scheduling order is to be changed to FIFO, LIFO, Read-First, Program-First, Erase-First, or Read-Program-Interleaving, control circuit 9 is operable to change the scheduling order to the indicated scheduling order.
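A sketch of how step 209 might consult reduced-noise-pattern lookup table 16 follows; the key layout, the index-to-action mapping, and the specific limit values are illustrative assumptions.

```python
# Sketch of table-driven modification: a noisy-pattern key (a tuple of
# feature values) maps to a reduced-noise-pattern index, which selects
# the corrective action. Index meanings and limit values are assumptions.
ACTIONS = {
    0: ("erase_suspend_limit", 1),     # reschedule erase-suspend commands
    1: ("program_suspend_limit", 1),   # reschedule program-suspend commands
    2: ("scheduling_order", "FIFO"),
    3: ("scheduling_order", "LIFO"),
    4: ("scheduling_order", "Read-First"),
    5: ("scheduling_order", "Program-First"),
    6: ("scheduling_order", "Erase-First"),
    7: ("scheduling_order", "Read-Program-Interleaving"),
}

def apply_reduced_noise_pattern(table, noisy_key, controller_state):
    index = table.get(noisy_key)           # reduced-noise-pattern lookup table 16
    if index is not None:
        setting, value = ACTIONS[index]
        controller_state[setting] = value  # step 209: modify queue / scheduling order
    return controller_state
```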
Flash controller 3 is operable to analyze each set of commands entered into the one or more instruction queues 14 and to identify, using neural network engine 10 and methods 100, 200, and 300, when the set of commands in instruction queue 14 will impact QoS (i.e., when the predicted latency will exceed the QoS latency threshold). Flash controller 3 is further operable to modify (step 108) one or more of the commands in instruction queue 14 before the next command in instruction queue 14 is performed, preventing the execution of a pattern of commands in instruction queue 14 that would have impacted QoS.
In one example, only commands having a command type that significantly contributes to latency are considered, and other types of commands are disregarded. By considering only those commands that significantly contribute to latency, faster processing is achieved (since the types of commands that do not significantly contribute to latency need not be processed). In one example, the user can specify which commands are considered.
In one example, QoS neural network 80 includes a plurality of input neurons 81 that receive the identified feature values.
Input neurons 81 include input neurons relating to a first command (C1), which may be the current command, including the command type (Command Type-C1), channel index (Channel Index-C1), die index (Die Index-C1), and plane index (Plane Index-C1) of the first command. Input neurons 81 include input neurons relating to a second command (C2), including the command type (Command Type-C2), channel index (Channel Index-C2), die index (Die Index-C2), and plane index (Plane Index-C2) of C2; input neurons relating to a third command (C3), including the command type (Command Type-C3), channel index (Channel Index-C3), die index (Die Index-C3), and plane index (Plane Index-C3) of C3; and so forth, through input neurons 81 relating to an nth command (Cn), including the command type (Command Type-Cn), channel index (Channel Index-Cn), die index (Die Index-Cn), and plane index (Plane Index-Cn) of the nth command.
In one example, n is equal to the total number of commands in the one or more instruction queues. Alternatively, n is equal to the total number of commands of the one or more command types that are monitored.
In one example, n is equal to the number of program-suspend and erase-suspend commands, and QoS neural network 80 includes neurons corresponding to command type, channel index, die index, and plane index for all program-suspend and erase-suspend commands in the one or more instruction queues. In another example, n is equal to the number of program-suspend, erase-suspend, program, and erase commands, and QoS neural network 80 includes neurons corresponding to command type, channel index, die index, and plane index for all program-suspend, erase-suspend, program, and erase commands in the one or more instruction queues. In another example, n is equal to the number of program-suspend, erase-suspend, program, erase, and read commands, and QoS neural network 80 includes neurons corresponding to command type, channel index, die index, and plane index for all program-suspend, erase-suspend, program, erase, and read commands in the one or more instruction queues.
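As an illustrative sketch, the per-command input neurons described above can be viewed as a flat vector of four values per monitored command; the integer encodings and the zero-padding for a partially filled queue are assumptions.

```python
# Sketch of assembling the input neurons of QoS neural network 80: four
# values (type, channel, die, plane) per monitored command, zero-padded
# to a fixed width of 4*n. Encodings and padding are assumptions.
COMMAND_TYPES = {"read": 0, "program": 1, "erase": 2,
                 "program-suspend": 3, "erase-suspend": 4}

def input_vector(commands, n):
    vec = []
    for cmd in commands[:n]:
        vec += [COMMAND_TYPES[cmd["type"]], cmd["channel"],
                cmd["die"], cmd["plane"]]
    vec += [0] * (4 * n - len(vec))   # zero-pad when fewer than n commands
    return vec
```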
In the description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present invention. It will be evident, however, to one of ordinary skill in the art that the present invention may be practiced without these specific details. In some instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring the present invention. These embodiments are described in sufficient detail to enable those of ordinary skill in the art to practice the invention, and it is to be understood that other embodiments may be utilized and that logical, mechanical, electrical, and other changes may be made without departing from the scope of the present invention.
The present application claims priority to U.S. Provisional Patent Application Ser. No. 63/190,222 filed on May 18, 2021, the contents of which are incorporated by reference herein in their entirety.