Memory Device Based Accelerated Deep-Learning System

Information

  • Patent Application
  • Publication Number: 20230251792
  • Date Filed: February 04, 2022
  • Date Published: August 10, 2023
Abstract
A data storage device includes a memory device and a controller coupled to the memory device. The controller is configured to be coupled to a host device. The controller is further configured to receive a plurality of commands, generate logical block address (LBA) to physical block address (PBA) (L2P) mappings for each of the plurality of commands, and store data of the plurality of commands to a respective PBA according to the generated L2P mappings. Each of the L2P mappings is generated based on a result of a deep learning (DL) training model using a neural network (NN) structure. The controller includes a NN command interpretation unit and a L2P mapping generator coupled to the NN command interpretation unit. The controller is configured to fetch training data and NN parameters from the memory device.
Description
BACKGROUND OF THE DISCLOSURE
Field of the Disclosure

Embodiments of the present disclosure generally relate to data storage devices, such as solid state drives (SSDs), and, more specifically, utilizing deep learning training models stored in non-volatile memory to boost read and write performance of the data storage device.


Description of the Related Art

Deep learning (DL) systems are a rapidly advancing technology with applications in various fields. However, as the capabilities of DL systems increase, the corresponding hardware resource consumption of the DL systems increases as well. Due to the size of the data sets and the DL models, DL systems may require very large capacities of fast memory. Such memory may be random access memory (RAM). However, non-volatile memories, such as NAND memory devices, may be interleaved in DL-hardware computations.


Typically, DL models are held in a dynamic RAM (DRAM) of the data storage device. As the size of the DL model increases, more DRAM may be required, thus increasing the cost of the data storage device. Non-volatile memory, such as NAND memory, may be less cost intensive per unit capacity than DRAM, but may not be comparable to DRAM in performance. For example, data sets may be about 100 GB or greater in size. A data set is a collection of data samples and labels that are used to tune a DL model.


Therefore, there is a need in the art for an improved DL system using non-volatile memory for training of DL models.


SUMMARY OF THE DISCLOSURE

The present disclosure generally relates to data storage devices, such as solid state drives (SSDs), and, more specifically, utilizing deep learning training models stored in non-volatile memory to boost read and write performance of the data storage device. A data storage device includes a memory device and a controller coupled to the memory device. The controller is configured to be coupled to a host device. The controller is further configured to receive a plurality of commands, generate logical block address (LBA) to physical block address (PBA) (L2P) mappings for each of the plurality of commands, and store data of the plurality of commands to a respective PBA according to the generated L2P mappings. Each of the L2P mappings is generated based on a result of a deep learning (DL) training model using a neural network (NN) structure. The controller includes a NN command interpretation unit and a L2P mapping generator coupled to the NN command interpretation unit. The controller is configured to fetch training data and NN parameters from the memory device.


In one embodiment, a data storage device includes a memory device and a controller coupled to the memory device. The controller is configured to be coupled to a host device. The controller is further configured to receive a plurality of commands, generate logical block address (LBA) to physical block address (PBA) (L2P) mappings for each of the plurality of commands, and store data of the plurality of commands to a respective PBA according to the generated L2P mappings. Each of the L2P mappings is generated based on a result of a deep learning (DL) training model using a neural network (NN) structure.


In another embodiment, a data storage device includes a memory device and a controller coupled to the memory device. The controller includes a neural network (NN) command interpretation unit and a logical block address (LBA) to physical block address (PBA) (L2P) mapping generator coupled to the NN command interpretation unit. The controller is configured to fetch training data and NN parameters from the memory device.


In another embodiment, a data storage device includes non-volatile memory means and a controller coupled to the non-volatile memory means. The controller is configured to store neural network (NN) parameters and one or more hyper parameter values in the non-volatile memory means, either perform a fully-autonomous deep learning (DL) training model or perform a semi-autonomous DL training model, and store data according to the performed DL training model.





BRIEF DESCRIPTION OF THE DRAWINGS

So that the manner in which the above recited features of the present disclosure can be understood in detail, a more particular description of the disclosure, briefly summarized above, may be had by reference to embodiments, some of which are illustrated in the appended drawings. It is to be noted, however, that the appended drawings illustrate only typical embodiments of this disclosure and are therefore not to be considered limiting of its scope, for the disclosure may admit to other equally effective embodiments.



FIG. 1 is a schematic block diagram illustrating a storage system in which a data storage device may function as a storage device for a host device, according to certain embodiments.



FIG. 2 is an exemplary illustration of a deep neural network, according to certain embodiments.



FIG. 3 is a schematic block diagram illustrating a LBA/PBA addressing system, according to certain embodiments.



FIG. 4 is a schematic block diagram illustrating a LBA/PBA addressing system, according to certain embodiments.



FIG. 5 is a flow diagram illustrating a method of a fully-autonomous data storage device operation during deep learning training, according to certain embodiments.



FIG. 6 is a flow diagram illustrating a method of a semi-autonomous data storage device operation during deep learning training, according to certain embodiments.





To facilitate understanding, identical reference numerals have been used, where possible, to designate identical elements that are common to the figures. It is contemplated that elements disclosed in one embodiment may be beneficially utilized on other embodiments without specific recitation.


DETAILED DESCRIPTION

In the following, reference is made to embodiments of the disclosure. However, it should be understood that the disclosure is not limited to specifically described embodiments. Instead, any combination of the following features and elements, whether related to different embodiments or not, is contemplated to implement and practice the disclosure. Furthermore, although embodiments of the disclosure may achieve advantages over other possible solutions and/or over the prior art, whether or not a particular advantage is achieved by a given embodiment is not limiting of the disclosure. Thus, the following aspects, features, embodiments, and advantages are merely illustrative and are not considered elements or limitations of the appended claims except where explicitly recited in a claim(s). Likewise, reference to “the disclosure” shall not be construed as a generalization of any inventive subject matter disclosed herein and shall not be considered to be an element or limitation of the appended claims except where explicitly recited in a claim(s).


The present disclosure generally relates to data storage devices, such as solid state drives (SSDs), and, more specifically, utilizing deep learning training models stored in non-volatile memory to boost read and write performance of the data storage device. A data storage device includes a memory device and a controller coupled to the memory device. The controller is configured to be coupled to a host device. The controller is further configured to receive a plurality of commands, generate logical block address (LBA) to physical block address (PBA) (L2P) mappings for each of the plurality of commands, and store data of the plurality of commands to a respective PBA according to the generated L2P mappings. Each of the L2P mappings is generated based on a result of a deep learning (DL) training model using a neural network (NN) structure. The controller includes a NN command interpretation unit and a L2P mapping generator coupled to the NN command interpretation unit. The controller is configured to fetch training data and NN parameters from the memory device.



FIG. 1 is a schematic block diagram illustrating a storage system 100 in which a host device 104 is in communication with a data storage device 106, according to certain embodiments. For instance, the host device 104 may utilize a non-volatile memory (NVM) 110 included in data storage device 106 to store and retrieve data. The host device 104 comprises a host DRAM 138. In some examples, the storage system 100 may include a plurality of storage devices, such as the data storage device 106, which may operate as a storage array. For instance, the storage system 100 may include a plurality of data storage devices 106 configured as a redundant array of inexpensive/independent disks (RAID) that collectively function as a mass storage device for the host device 104.


The host device 104 may store and/or retrieve data to and/or from one or more storage devices, such as the data storage device 106. As illustrated in FIG. 1, the host device 104 may communicate with the data storage device 106 via an interface 114. The host device 104 may comprise any of a wide range of devices, including computer servers, network-attached storage (NAS) units, desktop computers, notebook (i.e., laptop) computers, tablet computers, set-top boxes, telephone handsets such as so-called “smart” phones, so-called “smart” pads, televisions, cameras, display devices, digital media players, video gaming consoles, video streaming devices, or other devices capable of sending data to or receiving data from a data storage device.


The data storage device 106 includes a controller 108, NVM 110, a power supply 111, volatile memory 112, the interface 114, and a write buffer 116. In some examples, the data storage device 106 may include additional components not shown in FIG. 1 for the sake of clarity. For example, the data storage device 106 may include a printed circuit board (PCB) to which components of the data storage device 106 are mechanically attached and which includes electrically conductive traces that electrically interconnect components of the data storage device 106 or the like. In some examples, the physical dimensions and connector configurations of the data storage device 106 may conform to one or more standard form factors. Some example standard form factors include, but are not limited to, 3.5″ data storage device (e.g., an HDD or SSD), 2.5″ data storage device, 1.8″ data storage device, peripheral component interconnect (PCI), PCI-extended (PCI-X), PCI Express (PCIe) (e.g., PCIe x1, x4, x8, x16, PCIe Mini Card, MiniPCI, etc.). In some examples, the data storage device 106 may be directly coupled (e.g., directly soldered or plugged into a connector) to a motherboard of the host device 104.


Interface 114 may include one or both of a data bus for exchanging data with the host device 104 and a control bus for exchanging commands with the host device 104. Interface 114 may operate in accordance with any suitable protocol. For example, the interface 114 may operate in accordance with one or more of the following protocols: advanced technology attachment (ATA) (e.g., serial-ATA (SATA) and parallel-ATA (PATA)), Fibre Channel Protocol (FCP), small computer system interface (SCSI), serial attached SCSI (SAS), PCI, PCIe, non-volatile memory express (NVMe), OpenCAPI, GenZ, Cache Coherent Interconnect for Accelerators (CCIX), Open Channel SSD (OCSSD), or the like. Interface 114 (e.g., the data bus, the control bus, or both) is electrically connected to the controller 108, providing an electrical connection between the host device 104 and the controller 108, allowing data to be exchanged between the host device 104 and the controller 108. In some examples, the electrical connection of interface 114 may also permit the data storage device 106 to receive power from the host device 104. For example, as illustrated in FIG. 1, the power supply 111 may receive power from the host device 104 via interface 114.


The NVM 110 may include a plurality of memory devices or memory units. NVM 110 may be configured to store and/or retrieve data. For instance, a memory unit of NVM 110 may receive data and a message from controller 108 that instructs the memory unit to store the data. Similarly, the memory unit may receive a message from controller 108 that instructs the memory unit to retrieve data. In some examples, each of the memory units may be referred to as a die. In some examples, the NVM 110 may include a plurality of dies (i.e., a plurality of memory units). In some examples, each memory unit may be configured to store relatively large amounts of data (e.g., 128MB, 256MB, 512MB, 1GB, 2GB, 4GB, 8GB, 16GB, 32GB, 64GB, 128GB, 256GB, 512GB, 1TB, etc.).


In some examples, each memory unit may include any type of non-volatile memory devices, such as flash memory devices, phase-change memory (PCM) devices, resistive random-access memory (ReRAM) devices, magneto-resistive random-access memory (MRAM) devices, ferroelectric random-access memory (F-RAM), holographic memory devices, and any other type of non-volatile memory devices.


The NVM 110 may comprise a plurality of flash memory devices or memory units. NVM flash memory devices may include NAND or NOR-based flash memory devices and may store data based on a charge contained in a floating gate of a transistor for each flash memory cell. In NVM flash memory devices, the flash memory device may be divided into a plurality of dies, where each die of the plurality of dies includes a plurality of physical or logical blocks, which may be further divided into a plurality of pages. Each block of the plurality of blocks within a particular memory device may include a plurality of NVM cells. Rows of NVM cells may be electrically connected using a word line to define a page of a plurality of pages. Respective cells in each of the plurality of pages may be electrically connected to respective bit lines. Furthermore, NVM flash memory devices may be 2D or 3D devices and may be single level cell (SLC), multi-level cell (MLC), triple level cell (TLC), or quad level cell (QLC). The controller 108 may write data to and read data from NVM flash memory devices at the page level and erase data from NVM flash memory devices at the block level.
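
For illustration only (not part of the disclosure), the following Python sketch models the die/block/page hierarchy just described, using assumed geometry constants. It shows why a write targets a single page while an erase must cover the whole containing block.

    # Hypothetical NAND geometry; real devices vary.
    PAGES_PER_BLOCK = 256
    BLOCKS_PER_DIE = 1024

    def decompose(page_index: int) -> tuple[int, int, int]:
        """Map a flat physical page index to (die, block, page)."""
        die, rem = divmod(page_index, BLOCKS_PER_DIE * PAGES_PER_BLOCK)
        block, page = divmod(rem, PAGES_PER_BLOCK)
        return die, block, page

    die, block, page = decompose(300_000)
    # A program operation touches one (die, block, page) tuple; an erase
    # invalidates all PAGES_PER_BLOCK pages of the containing block.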


The power supply 111 may provide power to one or more components of the data storage device 106. When operating in a standard mode, the power supply 111 may provide power to one or more components using power provided by an external device, such as the host device 104. For instance, the power supply 111 may provide power to the one or more components using power received from the host device 104 via interface 114. In some examples, the power supply 111 may include one or more power storage components configured to provide power to the one or more components when operating in a shutdown mode, such as where power ceases to be received from the external device. In this way, the power supply 111 may function as an onboard backup power source. Some examples of the one or more power storage components include, but are not limited to, capacitors, super-capacitors, batteries, and the like. In some examples, the amount of power that may be stored by the one or more power storage components may be a function of the cost and/or the size (e.g., area/volume) of the one or more power storage components. In other words, as the amount of power stored by the one or more power storage components increases, the cost and/or the size of the one or more power storage components also increases.


The volatile memory 112 may be used by controller 108 to store information. Volatile memory 112 may include one or more volatile memory devices. In some examples, controller 108 may use volatile memory 112 as a cache. For instance, controller 108 may store cached information in volatile memory 112 until the cached information is written to the NVM 110. As illustrated in FIG. 1, volatile memory 112 may consume power received from the power supply 111. Examples of volatile memory 112 include, but are not limited to, random-access memory (RAM), dynamic random access memory (DRAM), static RAM (SRAM), and synchronous dynamic RAM (SDRAM (e.g., DDR1, DDR2, DDR3, DDR3L, LPDDR3, DDR4, LPDDR4, and the like)).


Controller 108 may manage one or more operations of the data storage device 106. For instance, controller 108 may manage the reading of data from and/or the writing of data to the NVM 110. In some embodiments, when the data storage device 106 receives a write command from the host device 104, the controller 108 may initiate a data storage command to store data to the NVM 110 and monitor the progress of the data storage command. Controller 108 may determine at least one operational characteristic of the storage system 100 and store at least one operational characteristic in the NVM 110. In some embodiments, when the data storage device 106 receives a write command from the host device 104, the controller 108 temporarily stores the data associated with the write command in the internal memory or write buffer 116 before sending the data to the NVM 110.



FIG. 2 is an exemplary illustration of a deep neural network (DNN) 200, according to certain embodiments. The DNN 200 includes an input layer 202, a first hidden layer 204a, a second hidden layer 204b, a third hidden layer 204c, and an output layer 206. The number of hidden layers shown is not intended to be limiting, but to provide an example of a possible embodiment. Furthermore, each of the input layer 202, the first hidden layer 204a, the second hidden layer 204b, the third hidden layer 204c, and the output layer 206 includes a plurality of nodes. Each node of the input layer 202 may be an input node for data input. Each node of the first hidden layer 204a, the second hidden layer 204b, and the third hidden layer 204c combines input from the data with a set of coefficients or weights that either amplify or dampen that input, thereby assigning significance to inputs with regard to the task the algorithm is trying to learn. The results of the third hidden layer 204c are passed to the nodes of the output layer 206.
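
As a minimal Python (NumPy) sketch of the topology just described, a feed-forward pass through an input layer, three hidden layers, and an output layer may look like the following. The layer sizes are illustrative assumptions, not taken from the disclosure.

    import numpy as np

    rng = np.random.default_rng(0)
    sizes = [8, 16, 16, 16, 4]  # input, three hidden layers, output (assumed)
    weights = [rng.standard_normal((m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
    biases = [rng.standard_normal(m) for m in sizes[1:]]

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def feed_forward(x):
        """Each node combines weighted input with a bias, then applies sigma."""
        a = x
        for w, b in zip(weights, biases):
            a = sigmoid(w @ a + b)  # a^l = sigma(w^l a^{l-1} + b^l)
        return a

    output = feed_forward(rng.standard_normal(sizes[0]))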


A basic forward computation operation (e.g., feed forward) of a single node activation in the DNN 200 may be represented by the following equation: $a_j^l = \sigma\left(\sum_k w_{jk}^l a_k^{l-1} + b_j^l\right)$. Multiply-accumulate (MAC) operations are summed and an activation function is applied, which may be a maximum (e.g., rectifier activation function or ReLU) or a sigmoid function. In other words, the forward computation operation applies a sigmoid activation function to a sum over weights multiplied by the input values to each neuron or node in the net, plus a bias. The DNN 200 learning scheme is based on backpropagation equations used for updating neural network (NN) weights. The backpropagation equations are based on weighted sums using calculated delta terms, given below in matrix and vector form for the nodes of the output layer 206 and the nodes of the first hidden layer 204a, the second hidden layer 204b, and the third hidden layer 204c.










$$\delta^L = \nabla_a C \odot \sigma'(z^L) \qquad \text{(BP1)}$$

$$\delta^l = \left( (w^{l+1})^T \delta^{l+1} \right) \odot \sigma'(z^l) \qquad \text{(BP2)}$$

$$\frac{\partial C}{\partial b_j^l} = \delta_j^l \qquad \text{(BP3)}$$

$$\frac{\partial C}{\partial w_{jk}^l} = a_k^{l-1} \, \delta_j^l \qquad \text{(BP4)}$$







The backpropagation equations (BP1, BP2, BP3, and BP4) show that there are fixed inputs (z) that are not changed and can be handled in static memory (e.g., NVM 110 of FIG. 1), and that there are adjustable values (C, δ, and w) that are adjusted or computed temporarily and may be handled in dynamic memory (e.g., DRAM). Another memory-consuming element is the DL models themselves (i.e., the NN parameters, such as the weights w and biases b). As the capabilities of the DNN 200 increase, the size of the DL models increases as well. Although a fully-connected NN architecture is exemplified, it is to be understood that the embodiments described herein may be applicable to other NN architectures.
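
A minimal NumPy sketch of one backpropagation step implementing BP1-BP4 is given below. The argument split mirrors the memory partition described above: zs and activations are fixed outputs of the forward pass (candidates for static memory), while the deltas and gradients are recomputed every iteration (candidates for dynamic memory). Function and argument names are assumptions for illustration.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def sigmoid_prime(z):
        s = sigmoid(z)
        return s * (1.0 - s)

    def backprop_step(weights, activations, zs, grad_cost):
        """One application of BP1-BP4 for a single sample."""
        deltas = [None] * len(weights)
        deltas[-1] = grad_cost * sigmoid_prime(zs[-1])                 # BP1
        for l in range(len(weights) - 2, -1, -1):                      # BP2
            deltas[l] = (weights[l + 1].T @ deltas[l + 1]) * sigmoid_prime(zs[l])
        grad_b = deltas                                                # BP3
        grad_w = [np.outer(d, a) for d, a in zip(deltas, activations[:-1])]  # BP4
        return grad_w, grad_b

    # Tiny demo with assumed layer sizes and a quadratic cost (grad_cost = a^L - y).
    rng = np.random.default_rng(0)
    ws = [rng.standard_normal((3, 4)), rng.standard_normal((2, 3))]
    zs = [rng.standard_normal(3), rng.standard_normal(2)]
    acts = [rng.standard_normal(4), sigmoid(zs[0]), sigmoid(zs[1])]
    grad_w, grad_b = backprop_step(ws, acts, zs, grad_cost=acts[-1] - np.array([0.0, 1.0]))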



FIG. 3 is a schematic block diagram illustrating a logical block address (LBA)/physical block address (PBA) addressing system 300, according to certain embodiments. The LBA/PBA addressing system 300 includes a host device 302 coupled to a data storage device 308. The data storage device 308 is coupled to a NVM storage system that includes a plurality of NVMs 316a-316n. It is to be understood that the plurality of NVMs 316a-316n may be disposed in the data storage device 308. In some examples, the plurality of NVMs 316a-316n are NAND devices. The host device 302 includes a CPU/GPU unit 304 and a block based command generator unit 306. The block based command generator unit 306 generates commands to be programmed to blocks of a NVM of the plurality of NVMs 316a-316n. The host device 302 is aware of the LBA of where the data is stored and the data storage device 308 is aware of the PBA of where the data is stored in the plurality of NVMs 316a-316n.


The data storage device 308 includes a command interpretation unit 310, a block based flash translation layer (FTL) translation unit 312, and a flash interface unit 314, all of which may be disposed in a controller, such as the controller 108 of FIG. 1. The command interpretation unit 310 may be configured to receive or retrieve commands from the block based command generator unit 306. The command interpretation unit 310 may process the commands and generate the relevant control information for the processed commands. The commands are then passed to the block based FTL translation unit 312, where the commands are translated from LBA to PBA. The flash interface unit 314 passes the read/write commands to the relevant NVM of the plurality of NVMs 316a-316n based on the PBA. In other words, the translation layer between LBA and PBA is stored in the data storage device 308, such that each time a command is passed from the host device 302 to the data storage device 308, the corresponding PBA for the LBA associated with the command is extracted from the translation layer.
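
As an illustrative sketch (the dictionary stores, allocation policy, and function names are assumptions, not the disclosed implementation), the block-based flow reduces to an in-device L2P table consulted on every command before the flash interface is invoked:

    flash: dict[int, bytes] = {}    # stand-in for the plurality of NVMs 316a-316n
    l2p_table: dict[int, int] = {}  # LBA -> PBA, held inside the data storage device
    next_free_pba = 0

    def ftl_write(lba: int, data: bytes) -> None:
        global next_free_pba
        pba = next_free_pba          # FTL chooses a physical block address
        next_free_pba += 1
        l2p_table[lba] = pba         # record the L2P mapping for this command
        flash[pba] = data            # flash interface routes the write by PBA

    def ftl_read(lba: int) -> bytes:
        return flash[l2p_table[lba]]  # extract the PBA for the host-supplied LBA

    ftl_write(42, b"sample")
    assert ftl_read(42) == b"sample"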



FIG. 4 is a schematic block diagram illustrating a LBA/PBA addressing system 400, according to certain embodiments. The LBA/PBA addressing system 400 includes a host device 402 coupled to a data storage device 408. The data storage device 408 is coupled to a NVM storage system that includes a plurality of NVMs 416a-416n. It is to be understood that the plurality of NVMs 416a-416n may be disposed in the data storage device 408. The host device 402 includes a CPU/GPU unit 404 and a NN interface command generator unit 406. The NN interface command generator unit 406 generates commands to be programmed to blocks of a NVM of the plurality of NVMs 416a-416n. In some examples, the plurality of NVMs 416a-416n are NAND devices. The commands may include the NN structure and one or more hyper parameter values. The NN structure and the one or more hyper parameter values are stored in one or more NVMs of the plurality of NVMs 416a-416n. The one or more hyper parameter values may define the training procedure of the DL model. The host device 402 is aware of the LBA of where the data is stored and the data storage device 408 is aware of the PBA of where the data is stored in the plurality of NVMs 416a-416n.


The data storage device 408 includes a NN interface command interpretation unit 410, a schedule based FTL translation unit 412, and a flash interface unit 414, all of which may be disposed in a controller, such as the controller 108 of FIG. 1. The NN interface command interpretation unit 410 may be configured to receive or retrieve commands from the NN interface command generator unit 406. The NN interface command interpretation unit 410 may process the commands and generate the relevant control information for the processed commands. In some embodiments, in order to reduce overhead and improve storage utilization for both dynamic parameters (e.g., “weights” and cost calculations) and static parameters, such as the data stored in an NVM of the plurality of NVMs 416a-416n, the data storage device may hold part or all of the NN structure and hyper parameter values.


The commands are then passed to the schedule based FTL translation unit 412, where the commands are translated from LBA to PBA based on a schedule (e.g., a DL model) that is passed to the data storage device 408 from the host device 402. The flash interface unit 414 passes the read/write commands to the relevant NVM of the plurality of NVMs 416a-416n based on the PBA. In other words, the translation layer between LBA and PBA is stored in the data storage device 408, such that each time a command is passed from the host device 402 to the data storage device 408, the corresponding PBA for the LBA associated with the command is extracted from the translation layer.
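
A hedged sketch of the schedule-based alternative follows: rather than resolving each command individually, the device walks a schedule passed from the host (derived from the DL model) and assigns PBAs in that pre-defined order. The two-field schedule format and the sequential placement policy are assumptions for illustration.

    from itertools import count

    pba_allocator = count()  # assumed sequential placement policy

    def translate_by_schedule(schedule):
        """Walk the host-supplied schedule and assign PBAs in that fixed order."""
        l2p = {}
        for start_lba, length in schedule:  # one entry per scheduled transfer
            for lba in range(start_lba, start_lba + length):
                l2p[lba] = next(pba_allocator)
        return l2p

    l2p = translate_by_schedule([(100, 4), (0, 4)])  # schedule from the DL model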



FIG. 5 is a flow diagram illustrating a method 500 of a fully-autonomous data storage device operation during deep learning training, according to certain embodiments. Method 500 may be implemented by the data storage device 408 of FIG. 4 or the controller 108 of FIG. 1. For exemplary purposes, aspects of the LBA/PBA addressing system 400 may be referenced herein. The fully-autonomous data storage device operation may omit the explicit transfer of NN parameters of specific read and write commands from the CPU/GPU unit 404 to the data storage device 408. In cases when the GPU is utilized in addition to the CPU, dual read/write direct storage access may be allowed between the GPU and the plurality of NVMs 416a-416n.


Rather, the data storage device 408 may hold the NN structure and the hyper parameter values. The NN interface command interpretation unit 410 may receive the NN structure and/or the hyper parameter values prior to the training process, or choose the NN structure and/or the hyper parameter values stored in a static configuration (i.e., stored offline). Thus, the training process and the placement of data in buffers (i.e., placement of data into an NVM of the plurality of NVMs 416a-416n based on a L2P mapping) may be completed in a “fully-autonomous” manner, such as without the need for feedback from the host device 402.


At block 502, the host device 402 chooses a NN structure from a pre-defined configuration or passes the NN structure explicitly. The pre-defined configuration may be NN structures previously trained or default NN structures. At block 504, the host device 402 starts a training process by passing a data location through a dedicated interface. For example, the training process may be started by placing values or the data location in the nodes of the input layer 202 of FIG. 2. At block 506, the data storage device 408, or, more specifically, the controller 108, conducts reads and writes according to a pre-defined schedule. The pre-defined schedule may be the NN structure and/or hyper parameter values passed from the host device 402 to the data storage device 408 prior to the training process or held in the data storage device 408 in an offline location (e.g., an NVM of the plurality of NVMs 416a-416n). At block 508, the host device 402 conducts calculations by reading and placing data in the buffers directed to the data storage device 408.


Method 500 may implement block 506 and block 508 independently or together. For example, the controller 108 may execute block 506 without executing block 508. In some examples, the results of block 506 may be passed to the host device 402 to implement in block 508, and/or the results of block 508 may be passed to the data storage device 408 to implement in block 506. As the need for random reads and writes diminishes, data may be addressed in either a full block size or a partial block size. Thus, the NN parameters may be addressed in the pre-defined schedule via starting points and offsets, as sketched below. At block 510, the DL model training ends if a threshold number of iterations has been reached (i.e., the pre-defined training schedule ends) or by the host device 402 terminating the training process, such as due to the cost calculation remaining constant.
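
The following Python sketch illustrates the starting-point-plus-offset addressing; the record layout and field names are assumptions for illustration, not the disclosed schedule format.

    from dataclasses import dataclass

    @dataclass
    class ScheduleEntry:
        start_pba: int  # starting point of the parameter region
        offset: int     # extent in blocks (full or partial block sizes)
        is_write: bool  # read weights vs. write updated weights

    def expand_schedule(entries):
        """Block 506: emit the exact read/write sequence the schedule dictates."""
        for e in entries:
            op = "write" if e.is_write else "read"
            for pba in range(e.start_pba, e.start_pba + e.offset):
                yield op, pba

    ops = list(expand_schedule([ScheduleEntry(0, 2, False), ScheduleEntry(0, 2, True)]))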


In an alternate addressing scheme, a key value (KV) pair interface may be used rather than an LBA to PBA mapping. Each data instance (e.g., value) may be addressed by using a key. NN parameters may be addressed in structures relating to iterations or parts of iterations. For example, all the NN parameters that belong to a first iteration (e.g., nodes 1-100 from a list of more than 100 nodes) may be addressed through a single key.
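
A minimal sketch of the KV-pair alternative (the key format and in-memory store are assumptions): all NN parameters belonging to one iteration are stored and fetched under a single key instead of individual LBA/PBA mappings.

    kv_store: dict[str, bytes] = {}

    def put_iteration_params(iteration: int, blob: bytes) -> None:
        kv_store[f"iter-{iteration}/nodes-1-100"] = blob  # one key per iteration slice

    def get_iteration_params(iteration: int) -> bytes:
        return kv_store[f"iter-{iteration}/nodes-1-100"]

    put_iteration_params(0, b"packed weights for nodes 1-100")
    params = get_iteration_params(0)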


In order to reduce model overfitting (e.g., redundant calculations, unnecessary shifts, etc.), DL model training may use dropout. Dropout disables some of the nodes of one or more hidden layers in each iteration of the algorithm to improve the robustness of the DL model, thus improving the performance of the algorithm. However, dropout introduces a measure of uncertainty: because the network connections effectively change in each iteration, the NN parameters may be used differently. If the dropout can be applied before the training process, then the modified NN connections may already be reflected in the NN hyper parameters. For example, the controller 108 or the data storage device 408 may apply the dropout to specific nodes either by parsing the NN structure iteration by iteration or by indicating which nodes should be skipped in each iteration. In some examples, the data storage device 408 or the controller 108 may randomize the nodes that are dropped out in each iteration according to a pre-defined randomization setting.
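
One way to realize such a pre-defined randomization setting, sketched here as an assumption rather than the disclosed mechanism, is to seed a generator with the iteration number so the device can reproduce the dropped nodes without host feedback:

    import numpy as np

    def dropout_mask(iteration: int, num_nodes: int, rate: float = 0.2) -> np.ndarray:
        """Deterministic per-iteration mask; True means the node stays active."""
        rng = np.random.default_rng(seed=iteration)  # pre-defined randomization
        return rng.random(num_nodes) >= rate

    skip_reads = ~dropout_mask(iteration=3, num_nodes=100)  # parameters not fetched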



FIG. 6 is a flow diagram illustrating a method 600 of a semi-autonomous data storage device operation during deep learning training, according to certain embodiments. Method 600 may be implemented by the data storage device 408 of FIG. 4 or the controller 108 of FIG. 1. For exemplary purposes, aspects of the LBA/PBA addressing system 400 may be referenced herein. When the data storage device 408 is operating in the semi-autonomous mode, the CPU/GPU unit 404 may point out the NN parameters to read in each iteration. Thus, the challenges of synchronizing reads/writes and of handling dropout may be reduced when storing data in the plurality of NVMs 416a-416n based on a L2P mapping.


The data storage device 408 or the controller 108 may utilize the unique characteristics of the DL model training workload and update the NN parameters after each read and loss calculation in a pre-defined deterministic manner. Thus, the data storage device 408 or the controller 108 may update the “weights” by implementing write commands in a semi-autonomous manner. In other words, each update or write to the NN parameters or “weights” is completed to the same address as the previous read. Therefore, there may be no need to send specific write commands. Rather, the CPU/GPU unit 404 will transfer the list of NN parameter “weights” to update to the data storage device 408 after each iteration.
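
An illustrative sketch of this update path (data structures and names are assumptions): the device records the address of each weight it serves, and the host's per-iteration update list is applied to those same addresses, so no explicit write commands are needed.

    last_read_pba: dict[int, int] = {}  # weight id -> PBA of the previous read
    flash: dict[int, bytes] = {}        # stand-in for the NVMs 416a-416n

    def serve_weight_read(weight_id: int, pba: int) -> bytes:
        last_read_pba[weight_id] = pba  # remember where this weight was read from
        return flash.get(pba, b"")

    def apply_host_updates(updates: dict[int, bytes]) -> None:
        """Each new value lands at the same address as the previous read."""
        for weight_id, value in updates.items():
            flash[last_read_pba[weight_id]] = value

    serve_weight_read(7, pba=123)
    apply_host_updates({7: b"updated weight"})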


At block 602, the host device 402 chooses a NN structure from a pre-defined configuration or passes the NN structure explicitly for one iteration. The pre-defined configuration may be NN structures previously trained or default NN structures. At block 604, the host device 402 starts a training process by passing a data location through a dedicated interface. For example, the training process may be started by placing values or the data location in the nodes of the input layer 202 of FIG. 2. At block 606, the data storage device 408, or, more specifically, the controller 108, conducts reads and writes according to a pre-defined schedule for one training iteration. The pre-defined schedule may be the NN structure and/or hyper parameter values passed from the host device 402 to the data storage device 408 prior to the training process or held in the data storage device 408 in an offline location (e.g., an NVM of the plurality of NVMs 416a-416n). At block 608, the host device 402 conducts calculations by reading and placing data in the buffers directed to the data storage device 408.


Method 600 may implement block 606 and block 608 independently or together. For example, the controller 108 may execute block 606 without executing block 608. In some examples, the results of block 606 may be passed to the host device 402 to implement in block 608, and/or the results of block 608 may be passed to the data storage device 408 to implement in block 606. As the need for random reads and writes diminishes, data may be addressed in either a full block size or a partial block size. Thus, the NN parameters may be addressed in the pre-defined schedule via starting points and offsets. At block 610, the data storage device 408 or the controller 108 determines if the DL model training has ended, as sketched below. For example, if a threshold number of iterations has been reached (i.e., the pre-defined training schedule ends) or the host device 402 terminates the training process, such as due to the cost calculation remaining constant, the training has ended. If the training has not ended at block 610, then method 600 returns to block 602. However, if the training has ended at block 610, then method 600 ends at block 612.
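
The block 610 check may be sketched as follows; the iteration threshold and the tolerance used to decide that the cost is remaining constant are assumed values, not taken from the disclosure.

    MAX_ITERATIONS = 10_000  # assumed threshold (pre-defined training schedule)
    EPSILON = 1e-6           # assumed tolerance for a "constant" cost

    def train(run_iteration) -> None:
        prev_cost = float("inf")
        for i in range(MAX_ITERATIONS):          # block 610: threshold check
            cost = run_iteration(i)              # blocks 602-608, one iteration
            if abs(prev_cost - cost) < EPSILON:  # host-side termination on flat cost
                break                            # method 600 ends at block 612
            prev_cost = cost

    train(lambda i: 1.0 / (i + 1))  # demo: decreasing cost, loop exits early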


By reducing the overhead of command transfer and interpretation between a host device running a machine learning application and the flash memory of a data storage device, power consumption may be reduced and throughput may be improved.


In one embodiment, a data storage device includes a memory and a controller coupled to the memory device. The controller is configured to be coupled to a host device. The controller is further configured to receive a plurality of commands, generate logical block address (LBA) to physical block address (PBA) (L2P) mappings for each of the plurality of commands, and store data of the plurality of commands to a respective PBA according to the generated L2P mappings. Each of the L2P mappings are generated based on a result of a deep learning (DL) training model using a neural network (NN) structure.


The controller is further configured to receive the NN structure and one or more hyper parameter values and store the NN structure and the hyper parameter values in the memory device. The NN structure is received from a host device. The memory device is a non-volatile memory device. The one or more hyper parameter values define a training procedure of the DL training model. The NN structure and the one or more hyper parameter values are provided to the DL training model at a beginning of the training procedure. The DL training model uses pre-defined hyper parameter values of one or more pre-defined parameter sets. The DL training model is updated after generating each of the L2P mappings. The controller is further configured to read weights according to the NN structure. The weights are updated after generating each of the L2P mappings. The controller is further configured to place the data of the plurality of commands in a specified buffer. The placing is completed without involvement of a host device.


In another embodiment, a data storage device includes a memory and a controller coupled to the memory device. The controller includes a neural network (NN) command interpretation unit and a logical block address (LBA) to physical block address (PBA) (L2P) mapping generator coupled to the NN command interpretation unit. The controller is configured to fetch training data and NN parameters from the memory device.


The NN command interpretation unit is configured to interface with a NN interface command generator disposed in a host device. The NN parameters are KV pair data. The training data and the NN parameters are utilized in a deep learning (DL) training model. One or more parts of the DL training model are disabled. The controller is configured to perform autonomous fetching of the training data and the NN parameters from the memory device. The controller is further configured to update one or more weights associated with a deep learning (DL) training model. The updating is to a same address as a previous read of the one or more weights.


In another embodiment, a data storage device includes non-volatile memory means and a controller coupled to the non-volatile memory means. The controller is configured to store neural network (NN) parameters and one or more hyper parameter values in the non-volatile memory means, either perform a fully-autonomous deep learning (DL) training model or perform a semi-autonomous DL training model, and store data according to the performed DL training model.


The non-volatile memory means is NAND-based memory means. The performing includes conducting reads and writes according to a pre-defined training schedule.


While the foregoing is directed to embodiments of the present disclosure, other and further embodiments of the disclosure may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims
  • 1. A data storage device, comprising: a memory device; a controller coupled to the memory device, wherein the controller is configured to be coupled to a host device, and wherein the controller is further configured to: receive a plurality of commands; generate logical block address (LBA) to physical block address (PBA) (L2P) mappings for each of the plurality of commands, wherein each of the L2P mappings is generated based on a result of a deep learning (DL) training model using a neural network (NN) structure; and store data of the plurality of commands to a respective PBA according to the generated L2P mappings.
  • 2. The data storage device of claim 1, wherein the controller is further configured to: receive the NN structure and one or more hyper parameter values; and store the NN structure and the hyper parameter values in the memory device.
  • 3. The data storage device of claim 2, wherein the NN structure is received from a host device.
  • 4. The data storage device of claim 2, wherein the memory device is a non-volatile memory device.
  • 5. The data storage device of claim 2, wherein the one or more hyper parameter values define a training procedure of the DL training model.
  • 6. The data storage device of claim 5, wherein the NN structure and the one or more hyper parameter values are provided to the DL training model at a beginning of the training procedure.
  • 7. The data storage device of claim 5, wherein the DL training model uses pre-defined hyper parameter values of one or more pre-defined parameter sets.
  • 8. The data storage device of claim 1, wherein the DL training model is updated after generating each of the L2P mappings.
  • 9. The data storage device of claim 1, wherein the controller is further configured to read weights according to the NN structure, and wherein the weights are updated after generating each of the L2P mappings.
  • 10. The data storage device of claim 1, wherein the controller is further configured to place the data of the plurality of commands in a specified buffer, and wherein the placing is completed without involvement of a host device.
  • 11. A data storage device, comprising: a memory device; a controller coupled to the memory device, the controller comprising: a neural network (NN) command interpretation unit; and a logical block address (LBA) to physical block address (PBA) (L2P) mapping generator coupled to the NN command interpretation unit, wherein the controller is configured to fetch training data and NN parameters from the memory device.
  • 12. The data storage device of claim 11, wherein the NN command interpretation unit is configured to interface with a NN interface command generator disposed in a host device.
  • 13. The data storage device of claim 11, wherein the NN parameters are KV pair data.
  • 14. The data storage device of claim 11, wherein the training data and the NN parameters are utilized in a deep learning (DL) training model.
  • 15. The data storage device of claim 14, wherein one or more parts of the DL training model are disabled.
  • 16. The data storage device of claim 11, wherein the controller is configured to perform autonomous fetching of the training data and the NN parameters from the memory device.
  • 17. The data storage device of claim 11, wherein the controller is further configured to update one or more weights associated with a deep learning (DL) training model, and wherein the updating is to a same address as a previous read of the one or more weights.
  • 18. A data storage device, comprising: non-volatile memory means; and a controller coupled to the non-volatile memory means, the controller configured to: store neural network (NN) parameters and one or more hyper parameter values in the non-volatile memory means; either: perform a fully-autonomous deep learning (DL) training model; or perform a semi-autonomous DL training model; and store data according to the performed DL training model.
  • 19. The data storage device of claim 18, wherein the non-volatile memory means is NAND-based memory means.
  • 20. The data storage device of claim 18, wherein the performing comprises conducting reads and writes according to a pre-defined training schedule.