Embodiments of the present disclosure generally relate to data storage devices, such as solid state drives (SSDs), and, more specifically, utilizing deep learning training models stored in non-volatile memory to boost read and write performance of the data storage device.
Deep learning (DL) systems are a rapidly expanding technology with capabilities in various fields. However, as the capabilities of DL systems increase, the corresponding hardware resource consumption of the DL systems increases as well. Due to the size of the data sets and the DL models, DL systems may require very large capacities of fast memory. Such memory may be random access memory (RAM). However, non-volatile memories, such as NAND memory devices, may be interlaced in DL-hardware computations.
Typically, DL models are held in a dynamic RAM (DRAM) of the data storage device. As the size of the DL model increases, more DRAM may be required, thus increasing the cost of the data storage device. Non-volatile memories, such as NAND memory, may be less cost intensive per unit of capacity than DRAM. However, NAND memory may not be comparable in performance output to that of DRAM. For example, data sets may be about 100 GB or greater in size. A data set is a collection of data samples and labels that are used to tune a DL model.
Therefore, there is a need in the art for an improved DL system using non-volatile memory for training of DL models.
The present disclosure generally relates to data storage devices, such as solid state drives (SSDs), and, more specifically, utilizing deep learning training models stored in non-volatile memory to boost read and write performance of the data storage device. A data storage device includes a memory device and a controller coupled to the memory device. The controller is configured to be coupled to a host device. The controller is further configured to receive a plurality of commands, generate logical block address (LBA) to physical block address (PBA) (L2P) mappings for each of the plurality of commands, and store data of the plurality of commands to a respective PBA according to the generated L2P mappings. Each of the L2P mappings is generated based on a result of a deep learning (DL) training model using a neural network (NN) structure. The controller includes a NN command interpretation unit and a L2P mapping generator coupled to the NN command interpretation unit. The controller is configured to fetch training data and NN parameters from the memory device.
In one embodiment, a data storage device includes a memory device and a controller coupled to the memory device. The controller is configured to be coupled to a host device. The controller is further configured to receive a plurality of commands, generate logical block address (LBA) to physical block address (PBA) (L2P) mappings for each of the plurality of commands, and store data of the plurality of commands to a respective PBA according to the generated L2P mappings. Each of the L2P mappings is generated based on a result of a deep learning (DL) training model using a neural network (NN) structure.
In another embodiment, a data storage device includes a memory device and a controller coupled to the memory device. The controller includes a neural network (NN) command interpretation unit and a logical block address (LBA) to physical block address (PBA) (L2P) mapping generator coupled to the NN command interpretation unit. The controller is configured to fetch training data and NN parameters from the memory device.
In another embodiment, a data storage device includes non-volatile memory means and a controller coupled to the non-volatile memory means. The controller is configured to store neural network (NN) parameters and one or more hyper parameter values in the non-volatile memory means, either perform a fully-autonomous deep learning (DL) training model or perform a semi-autonomous DL training model, and store data according to the performed DL training model.
So that the manner in which the above recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this disclosure and are therefore not to be considered limiting of its scope, for the disclosure may admit to other equally effective embodiments.
To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation.
In the following, reference is made to embodiments of the disclosure. However, it should be understood that the disclosure is not limited to specifically described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the disclosure. Furthermore, although embodiments of the disclosure may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the disclosure. Thus, the following aspects, features, embodiments, and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the disclosure” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s).
The present disclosure generally relates to data storage devices, such as solid state drives (SSDs), and, more specifically, utilizing deep learning training models stored in non-volatile memory to boost read and write performance of the data storage device. A data storage device includes a memory device and a controller coupled to the memory device. The controller is configured to be coupled to a host device. The controller is further configured to receive a plurality of commands, generate logical block address (LBA) to physical block address (PBA) (L2P) mappings for each of the plurality of commands, and store data of the plurality of commands to a respective PBA according to the generated L2P mappings. Each of the L2P mappings is generated based on a result of a deep learning (DL) training model using a neural network (NN) structure. The controller includes a NN command interpretation unit and a L2P mapping generator coupled to the NN command interpretation unit. The controller is configured to fetch training data and NN parameters from the memory device.
The host device 104 may store and/or retrieve data to and/or from one or more storage devices, such as the data storage device 106.
The data storage device 106 includes a controller 108, NVM 110, a power supply 111, volatile memory 112, the interface 114, and a write buffer 116. In some examples, the data storage device 106 may include additional components not shown in FIG. 1.
Interface 114 may include one or both of a data bus for exchanging data with the host device 104 and a control bus for exchanging commands with the host device 104. Interface 114 may operate in accordance with any suitable protocol. For example, the interface 114 may operate in accordance with one or more of the following protocols: advanced technology attachment (ATA) (e.g., serial-ATA (SATA) and parallel-ATA (PATA)), Fibre Channel Protocol (FCP), small computer system interface (SCSI), serially attached SCSI (SAS), PCI, PCIe, non-volatile memory express (NVMe), OpenCAPI, Gen-Z, Cache Coherent Interconnect for Accelerators (CCIX), Open Channel SSD (OCSSD), or the like. Interface 114 (e.g., the data bus, the control bus, or both) is electrically connected to the controller 108, providing an electrical connection between the host device 104 and the controller 108, allowing data to be exchanged between the host device 104 and the controller 108. In some examples, the electrical connection of interface 114 may also permit the data storage device 106 to receive power from the host device 104.
The NVM 110 may include a plurality of memory devices or memory units. NVM 110 may be configured to store and/or retrieve data. For instance, a memory unit of NVM 110 may receive data and a message from controller 108 that instructs the memory unit to store the data. Similarly, the memory unit may receive a message from controller 108 that instructs the memory unit to retrieve data. In some examples, each of the memory units may be referred to as a die. In some examples, the NVM 110 may include a plurality of dies (i.e., a plurality of memory units). In some examples, each memory unit may be configured to store relatively large amounts of data (e.g., 128MB, 256MB, 512MB, 1GB, 2GB, 4GB, 8GB, 16GB, 32GB, 64GB, 128GB, 256GB, 512GB, 1TB, etc.).
In some examples, each memory unit may include any type of non-volatile memory devices, such as flash memory devices, phase-change memory (PCM) devices, resistive random-access memory (ReRAM) devices, magneto-resistive random-access memory (MRAM) devices, ferroelectric random-access memory (F-RAM), holographic memory devices, and any other type of non-volatile memory devices.
The NVM 110 may comprise a plurality of flash memory devices or memory units. NVM Flash memory devices may include NAND or NOR-based flash memory devices and may store data based on a charge contained in a floating gate of a transistor for each flash memory cell. In NVM flash memory devices, the flash memory device may be divided into a plurality of dies, where each die of the plurality of dies includes a plurality of physical or logical blocks, which may be further divided into a plurality of pages. Each block of the plurality of blocks within a particular memory device may include a plurality of NVM cells. Rows of NVM cells may be electrically connected using a word line to define a page of a plurality of pages. Respective cells in each of the plurality of pages may be electrically connected to respective bit lines. Furthermore, NVM flash memory devices may be 2D or 3D devices and may be single level cell (SLC), multi-level cell (MLC), triple level cell (TLC), or quad level cell (QLC). The controller 108 may write data to and read data from NVM flash memory devices at the page level and erase data from NVM flash memory devices at the block level.
The power supply 111 may provide power to one or more components of the data storage device 106. When operating in a standard mode, the power supply 111 may provide power to one or more components using power provided by an external device, such as the host device 104. For instance, the power supply 111 may provide power to the one or more components using power received from the host device 104 via interface 114. In some examples, the power supply 111 may include one or more power storage components configured to provide power to the one or more components when operating in a shutdown mode, such as where power ceases to be received from the external device. In this way, the power supply 111 may function as an onboard backup power source. Some examples of the one or more power storage components include, but are not limited to, capacitors, super-capacitors, batteries, and the like. In some examples, the amount of power that may be stored by the one or more power storage components may be a function of the cost and/or the size (e.g., area/volume) of the one or more power storage components. In other words, as the amount of power stored by the one or more power storage components increases, the cost and/or the size of the one or more power storage components also increases.
The volatile memory 112 may be used by controller 108 to store information. Volatile memory 112 may include one or more volatile memory devices. In some examples, controller 108 may use volatile memory 112 as a cache. For instance, controller 108 may store cached information in volatile memory 112 until the cached information is written to the NVM 110.
Controller 108 may manage one or more operations of the data storage device 106. For instance, controller 108 may manage the reading of data from and/or the writing of data to the NVM 110. In some embodiments, when the data storage device 106 receives a write command from the host device 104, the controller 108 may initiate a data storage command to store data to the NVM 110 and monitor the progress of the data storage command. Controller 108 may determine at least one operational characteristic of the storage system 100 and store at least one operational characteristic in the NVM 110. In some embodiments, when the data storage device 106 receives a write command from the host device 104, the controller 108 temporarily stores the data associated with the write command in the internal memory or write buffer 116 before sending the data to the NVM 110.
A basic forward computation operation (e.g., feed forward) of a single node activation in the DNN 200 may be represented by the following equation: $a_j^l = \sigma\left(\sum_k w_{jk}^{l} a_k^{l-1} + b_j^{l}\right)$, where $a_j^l$ is the activation of node $j$ in layer $l$, $w_{jk}^l$ is the weight from node $k$ in layer $l-1$ to node $j$ in layer $l$, and $b_j^l$ is the bias of node $j$ in layer $l$. Multiply-accumulate (MAC) operations are summed and an activation function is calculated, which may be a maximum (e.g., rectifier activation function or ReLU) or a sigmoid function. In other words, the forward computation operation applies an activation function, such as a sigmoid, to the sum of the weighted inputs to each neuron or node in the network plus a bias. The DNN 200 learning scheme is based on backpropagation equations used for updating neural network (NN) weights. The backpropagation equations are based on weighted sums using calculated delta terms, given below in matrix and vector form, for the nodes of the output layer 206 and the nodes of the first hidden layer 204a, the second hidden layer 204b, and the third hidden layer 204c.
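The delta-term equations referenced as BP1, BP2, BP3, and BP4 are not reproduced in the text above. In the standard matrix and vector form of backpropagation, consistent with the notation of the feed-forward equation (with $C$ denoting the cost function, $z^l = w^l a^{l-1} + b^l$ the weighted input to layer $l$, and $\odot$ the element-wise product), they may be written as:

$$\delta^L = \nabla_a C \odot \sigma'(z^L) \qquad \text{(BP1)}$$
$$\delta^l = \left((w^{l+1})^T \delta^{l+1}\right) \odot \sigma'(z^l) \qquad \text{(BP2)}$$
$$\frac{\partial C}{\partial b_j^l} = \delta_j^l \qquad \text{(BP3)}$$
$$\frac{\partial C}{\partial w_{jk}^l} = a_k^{l-1}\,\delta_j^l \qquad \text{(BP4)}$$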
The backpropagation equations (BP1, BP2, BP3, and BP4) show that there are fixed inputs (z) that are not changed and can be handled in static memory (e.g., NVM 110 of FIG. 1).
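For concreteness, the following is a minimal NumPy sketch of one training iteration implementing BP1 through BP4 for a small fully connected network. The quadratic cost, learning rate, and all function names are illustrative assumptions, not the disclosed controller implementation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    s = sigmoid(z)
    return s * (1.0 - s)

def backprop_step(weights, biases, x, y, lr=0.1):
    """One feed-forward pass plus one BP1-BP4 backward pass.

    weights[l] has shape (n_l, n_{l-1}); biases[l] has shape (n_l, 1).
    """
    # Feed forward: store weighted inputs z^l and activations a^l per layer.
    a, activations, zs = x, [x], []
    for w, b in zip(weights, biases):
        z = w @ a + b              # MAC operations: weighted sum plus bias
        zs.append(z)
        a = sigmoid(z)             # activation function
        activations.append(a)

    # BP1: output-layer delta (a quadratic cost is assumed for illustration).
    delta = (activations[-1] - y) * sigmoid_prime(zs[-1])
    grads_w = [delta @ activations[-2].T]   # BP4 at the output layer
    grads_b = [delta]                       # BP3 at the output layer

    # BP2: propagate the deltas backwards through the hidden layers.
    for l in range(2, len(weights) + 1):
        delta = (weights[-l + 1].T @ delta) * sigmoid_prime(zs[-l])
        grads_w.insert(0, delta @ activations[-l - 1].T)  # BP4
        grads_b.insert(0, delta)                          # BP3

    # Gradient-descent update of the NN "weights".
    weights = [w - lr * gw for w, gw in zip(weights, grads_w)]
    biases = [b - lr * gb for b, gb in zip(biases, grads_b)]
    return weights, biases

# Example: a 4-8-3 network (input layer, one hidden layer, output layer).
rng = np.random.default_rng(0)
sizes = [4, 8, 3]
W = [rng.standard_normal((sizes[i + 1], sizes[i])) for i in range(len(sizes) - 1)]
B = [rng.standard_normal((sizes[i + 1], 1)) for i in range(len(sizes) - 1)]
W, B = backprop_step(W, B, rng.standard_normal((4, 1)), rng.standard_normal((3, 1)))
```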
The data storage device 308 includes a command interpretation unit 310, a block based flash translation layer (FTL) translation unit 312, and a flash interface unit 314, all of which may be disposed in a controller, such as the controller 108 of FIG. 1.
The data storage device 408 includes a NN interface command interpretation unit 410, a schedule based FTL translation unit 412, and a flash interface unit 414, all of which may be disposed in a controller, such as the controller 108 of FIG. 1.
The commands are then passed to the schedule based FTL translation unit 412, where the commands are translated from LBA to PBA based on a schedule (e.g., a DL model) that is passed to the data storage device 408 from the host device 402. The flash interface unit 414 passes the read/write commands to the relevant NVM of the plurality of NVMs 416a-416n based on the PBA. In other words, the translation layer between LBA and PBA is stored in the data storage device 408, such that each time a command is passed from the host device 402 to the data storage device 408, the corresponding PBA for the LBA associated with the command is extracted from the translation layer.
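As an illustration only, the following Python sketch models the lookup performed by a schedule based FTL translation unit; the class name and schedule format are assumptions, not the disclosed design:

```python
from typing import Dict, List, Tuple

class ScheduleBasedFTL:
    """Resolves LBAs against a translation layer held in the device."""

    def __init__(self, schedule: List[Tuple[int, int]]):
        # (lba, pba) pairs derived from the schedule (e.g., a DL model)
        # passed down by the host device.
        self.l2p: Dict[int, int] = dict(schedule)

    def translate(self, lba: int) -> int:
        # Each incoming command's LBA is resolved to the corresponding PBA
        # extracted from the stored translation layer.
        return self.l2p[lba]

ftl = ScheduleBasedFTL(schedule=[(0, 1024), (1, 2048)])
assert ftl.translate(1) == 2048
```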
Rather, the data storage device 408 may hold the NN structure and the hyper parameter values. The NN interface command interpretation unit 410 may receive the NN structure and/or the hyper parameter values prior to the training process or choose the NN structure and/or the hyper parameter values stored in a static configuration (i.e., stored offline). Thus, the training process and the placement of data in buffers (i.e., placement of data into an NVM of the plurality of NVMs 416a-416n based on a L2P mapping) may be completed in a “fully-autonomous” manner, such as without the need for feedback from the host device 402.
At block 502, the host device 402 chooses a NN structure from a pre-defined configuration or passes the NN structure explicitly. The pre-defined configuration may be NN structures previously trained or default NN structures. At block 504, the host device 402 starts a training process by passing a data location through a dedicated interface. For example, the training process may be started by placing values or the data location in the nodes of the input layer 202 of FIG. 2.
Method 500 may implement block 506 and block 508 either independently or together. For example, the controller 108 may execute block 506 without executing block 508. In some examples, the results of block 506 may be passed to the host device 402 to implement in block 508 and/or the results of block 508 may be passed to the data storage device 408 to implement in block 506. As the need for random reads and writes diminishes, data may be addressed in either a full block size or a partial block size. Thus, the NN parameters may be addressed in the pre-defined schedule via starting points and offsets. At block 510, the DL model training ends when a threshold number of iterations has been reached (i.e., the pre-defined training schedule ends) or when the host device 402 terminates the training process, such as due to the cost calculation remaining constant.
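A simulation-level sketch of this fully-autonomous flow is given below; the device class, schedule format, and helper names are illustrative assumptions, not the disclosed controller design:

```python
from typing import Callable, List, Tuple

class DataStorageDevice:
    def __init__(self, nn_structure: dict, schedule: List[Tuple[str, int]]):
        self.nn_structure = nn_structure  # block 502: structure chosen or passed by the host
        self.schedule = schedule          # pre-defined (op, address) training schedule
        self.memory = {}                  # stand-in for the NVM

    def run_training(self, data_location: int, max_iterations: int,
                     host_terminate: Callable[[int], bool]) -> int:
        # Block 504: the host has passed the data location; from here the
        # device executes reads and writes per the schedule autonomously.
        for iteration in range(max_iterations):
            for op, address in self.schedule:  # blocks 506/508
                if op == "read":
                    _ = self.memory.get(address)
                else:                           # "write"
                    self.memory[address] = iteration
            # Block 510: stop at the iteration threshold or on host request
            # (e.g., the cost calculation remains constant).
            if host_terminate(iteration):
                return iteration
        return max_iterations

device = DataStorageDevice(nn_structure={"layers": [4, 8, 3]},
                           schedule=[("read", 0), ("write", 0)])
device.run_training(data_location=0, max_iterations=100,
                    host_terminate=lambda i: False)
```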
In an alternate addressing scheme, a key value (KV) pair interface may be used rather than an LBA to PBA (L2P) mapping. Each data instance (e.g., value) may be addressed by using a key. NN parameters may be addressed in structures relating to iterations or parts of iterations. For example, all the NN parameters that belong to a first iteration (e.g., nodes 1-100 from a list of nodes greater than 100) may be addressed through a single key.
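A minimal sketch of such KV pair addressing, with a hypothetical key scheme, may look as follows:

```python
from typing import Dict
import numpy as np

kv_store: Dict[str, np.ndarray] = {}  # stand-in for the device's KV interface

# All NN parameters belonging to the first iteration (e.g., nodes 1-100)
# are stored and retrieved through a single key.
kv_store["iteration-1/nodes-1-100"] = np.random.rand(100)

params = kv_store["iteration-1/nodes-1-100"]  # one key fetches the whole group
```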
In order to reduce model overfitting (e.g., redundant calculations, unnecessary shifts, etc.), DL model training may use dropout. Dropout causes some of the nodes of one or more hidden layers to be disabled in each iteration of the algorithm to improve the robustness of the DL model, thus improving the performance of the algorithm. However, dropout introduces a measure of uncertainty. Because the network connections effectively change in each iteration, the NN parameters may be used differently. If the dropout can be applied before the training process, then the modified NN connections may already be reflected in the NN hyper parameters. For example, the controller 108 or the data storage device 408 may apply the dropout to specific nodes either by parsing the NN structure iteration by iteration or by indicating which nodes should be skipped in each iteration. In some examples, the data storage device 408 or the controller 108 may randomize the nodes that are dropped out in each iteration according to a pre-defined randomization setting.
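One way to realize a pre-defined randomization setting is to derive the dropout mask from a fixed seed, as in the following illustrative sketch (the function name and seeding scheme are assumptions):

```python
import numpy as np

def dropout_mask(layer_size: int, drop_rate: float, seed: int, iteration: int) -> np.ndarray:
    # Seeding per iteration makes the dropped nodes fully deterministic, so
    # the device can know in advance which nodes are skipped each iteration.
    rng = np.random.default_rng(seed + iteration)
    return (rng.random(layer_size) >= drop_rate).astype(np.float32)

mask = dropout_mask(layer_size=128, drop_rate=0.2, seed=42, iteration=3)
activations = np.random.rand(128) * mask  # disabled nodes contribute zero
```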
The data storage device 408 or the controller 108 may utilize the unique characteristics of the DL model training workload and update the NN parameters after each read and loss calculation in a pre-defined deterministic manner. Thus, the data storage device 408 or the controller 108 may update the “weights” by implementing write commands in a semi-autonomous manner. In other words, each update or write to the NN parameters or “weights” is completed to the same address as the previous read. Therefore, there may be no need to send specific write commands. Rather, the CPU/GPU unit 404 will transfer the list of NN parameter “weights” to update to the data storage device 408 after each iteration.
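The following sketch illustrates this write-back-to-the-same-address scheme with an in-memory stand-in for the NVM; all names are illustrative assumptions:

```python
import numpy as np

nvm = {addr: np.random.rand(16) for addr in (0x100, 0x200, 0x300)}  # stand-in NVM

def host_update(weights: np.ndarray) -> np.ndarray:
    # Stand-in for the CPU/GPU loss calculation producing updated weights.
    return weights - 0.01 * np.random.rand(*weights.shape)

for address in sorted(nvm):              # pre-defined deterministic order
    weights = nvm[address]               # read of the NN parameter "weights"
    nvm[address] = host_update(weights)  # write to the same address as the read
```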
At block 602, the host device 402 chooses a NN structure from a pre-defined configuration or passes the NN structure explicitly for one iteration. The pre-defined configuration may be NN structures previously trained or default NN structures. At block 604, the host device 402 starts a training process by passing a data location through a dedicated interface. For example, the training process may be started by placing values or the data location in the nodes of the input layer 202 of FIG. 2.
Method 600 may implement block 606 and block 608 either independently or together. For example, the controller 108 may execute block 606 without executing block 608. In some examples, the results of block 606 may be passed to the host device 402 to implement in block 608 and/or the results of block 608 may be passed to the data storage device 408 to implement in block 606. As the need for random reads and writes diminishes, data may be addressed in either a full block size or a partial block size. Thus, the NN parameters may be addressed in the pre-defined schedule via starting points and offsets. At block 610, the data storage device 408 or the controller 108 determines whether the DL model training has ended. For example, the training has ended if a threshold number of iterations has been reached (i.e., the pre-defined training schedule ends) or if the host device 402 terminates the training process, such as due to the cost calculation remaining constant. If the training has not ended at block 610, then method 600 returns to block 602. However, if the training has ended at block 610, then method 600 ends at block 612.
By reducing the overhead of command transfer and interpretation between a host device running a machine learning application and the flash memory of a data storage device, power consumption may be reduced and throughput may be improved.
In one embodiment, a data storage device includes a memory device and a controller coupled to the memory device. The controller is configured to be coupled to a host device. The controller is further configured to receive a plurality of commands, generate logical block address (LBA) to physical block address (PBA) (L2P) mappings for each of the plurality of commands, and store data of the plurality of commands to a respective PBA according to the generated L2P mappings. Each of the L2P mappings is generated based on a result of a deep learning (DL) training model using a neural network (NN) structure.
The controller is further configured to receive the NN structure and one or more hyper parameter values and store the NN structure and the hyper parameter values in the memory device. The NN structure is received from a host device. The memory device is a non-volatile memory device. The one or more hyper parameter values define a training procedure of the DL training model. The NN structure and the one or more hyper parameter values are provided to the DL training model at a beginning of the training procedure. The DL training model uses pre-defined hyper parameter values of one or more pre-defined parameter sets. The DL training model is updated after generating each of the L2P mappings. The controller is further configured to read weights according to the NN structure. The weights are updated after generating each of the L2P mappings. The controller is further configured to place the data of the plurality of commands in a specified buffer. The placing is completed without involvement of a host device.
In another embodiment, a data storage device includes a memory device and a controller coupled to the memory device. The controller includes a neural network (NN) command interpretation unit and a logical block address (LBA) to physical block address (PBA) (L2P) mapping generator coupled to the NN command interpretation unit. The controller is configured to fetch training data and NN parameters from the memory device.
The NN command interpretation unit is configured to interface with a NN interface command generator disposed in a host device. The NN parameters are KV pair data. The training data and the NN parameters are utilized in a deep learning (DL) training model. One or more parts of the DL training model are disabled. The controller is configured to perform autonomous fetching of the training data and the NN parameters from the memory device. The controller is further configured to update one or more weights associated with a deep learning (DL) training model. The updating is to a same address as a previous read of the one or more weights.
In another embodiment, a data storage device includes non-volatile memory means and a controller coupled to the non-volatile memory means. The controller is configured to store neural network (NN) parameters and one or more hyper parameter values in the non-volatile memory means, either perform a fully-autonomous deep learning (DL) training model or perform a semi-autonomous DL training model, and store data according to the performed DL training model.
The non-volatile memory means is NAND-based memory means. The performing includes conducting reads and writes according to a pre-defined training schedule.
While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.