This U.S. application claims priority under 35 U.S.C. § 119 to Korean Patent Application No. 10-2017-0135435 filed Oct. 18, 2017, in the Korean Intellectual Property Office, the disclosure of which is incorporated by reference herein in its entirety.
Embodiments of the inventive concept described herein relate to a semiconductor memory, and more particularly, relate to a storage device, a computing system including the storage device, and an operating method of the storage device.
A storage device may include a nonvolatile memory. Since a storage device with a nonvolatile memory retains data stored therein even at power-off, the storage device can be used to store data for a long time. The storage device may be used as a main memory in various electronic devices such as a personal computer or a smartphone.
A pattern of performing data reading and writing on the storage device may vary according to data usage patterns of users and environments in which data is used. The operating performance of the storage device may vary if the pattern of performing data reading and writing on the storage device varies.
A manufacturer of the storage device can set an algorithm for internal operations (e.g., write and read operations) thereof based on average usage patterns and use environments. However, such an algorithm fails to provide optimal operating performance for each user.
At least one embodiment of the inventive concept provides a storage device that provides an optimum operating performance to each user, a computing system including the storage device, and an operating method of the storage device.
According to an exemplary embodiment of the inventive concept, a storage device includes a nonvolatile memory device having a plurality of first memory blocks to store a plurality of machine learning-based models and a plurality of second memory blocks configured to store user data, a controller that selects one of the machine learning-based models based on a model selection request, a processor that loads model data associated with the selected model and schedules a task associated with the nonvolatile memory device based on the selected model, and a memory interface that accesses the second memory blocks of the nonvolatile memory device based on the scheduled task.
According to an exemplary embodiment, a computing system includes a host device, and a storage device. The host device includes a first controller that generates host environment information associated with the computing system, and selects a model associated with the storage device based on machine learning depending on the host environment information and sends a model selection request indicating the selected model to the storage device. The storage device includes a nonvolatile memory device including memory blocks, a second controller that selects one of a plurality of machine learning-based models depending on the model selection request, and a processor that executes the selected model to access the nonvolatile memory device.
According to an exemplary embodiment of the inventive concept, an operation method of a storage device includes a controller of the storage device selecting an operating model of the storage device generated from a first machine learning operation, the controller receiving a request to access the storage device from an external host device, the controller scheduling a task to be performed on the storage device based on a second machine learning operation that considers the access request and the selected operating model, and a processor of the storage device executing the scheduled task.
The inventive concept will become apparent by describing in detail exemplary embodiments thereof with reference to the accompanying drawings.
Below, exemplary embodiments of the inventive concept will be described clearly and in detail with reference to accompanying drawings to such an extent that one of ordinary skill in the art can implement the inventive concept.
The host device 100 includes an information collection module 114 and a model classifier 160. The information collection module 114 may collect environment information of the host device 100. For example, the information collection module 114 may collect information such as a tendency of a user using the host device 100, an access pattern of an application or process executed in the host device 100, an access pattern of a memory in the host device 100, or a workload of the host device 100, and may output the collected information as host environment information HEI. For example, the workload could indicate the number of instructions executed by a processor of the host device 100 during a given period. For example, the access pattern of the application could indicate a sequence of processes that are executed during a given period. For example, the access pattern of the memory could indicate a sequence of memory locations that are accessed (read or written) during a given period. In an embodiment, the information collection module 114 is a computer program that is executed by a processor of the host device 100. In another embodiment, the information collection module 114 is implemented by a controller including one or more hardware counters, logic circuits, and registers. For example, a logic circuit can control a hardware counter to count the number of instructions executed by the processor during a given period to generate a workload value. For example, a logic circuit can set registers to indicate the sequence of applications or memory locations accessed during a given period to generate an access pattern.
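As an illustration of how such environment information could be gathered, the following C sketch collects an instruction-count workload and a short memory access pattern. It is a minimal sketch under assumed interfaces: the structure layout, the PATTERN_DEPTH constant, and the read_instruction_counter() callback are hypothetical and are not part of the embodiments described herein.

```c
/* Hypothetical sketch of information collection; names and counter sources
 * are illustrative assumptions, not the disclosed implementation. */
#include <stdint.h>
#include <string.h>

#define PATTERN_DEPTH 8  /* number of recent accesses kept as a pattern */

struct host_env_info {
    uint64_t workload;                      /* instructions executed in the last period */
    uint32_t access_pattern[PATTERN_DEPTH]; /* recent memory locations (e.g., page numbers) */
    uint32_t pattern_len;
};

/* Record one memory access into the pattern buffer; the oldest entry is dropped. */
static void record_access(struct host_env_info *hei, uint32_t location)
{
    if (hei->pattern_len < PATTERN_DEPTH) {
        hei->access_pattern[hei->pattern_len++] = location;
    } else {
        memmove(hei->access_pattern, hei->access_pattern + 1,
                (PATTERN_DEPTH - 1) * sizeof(uint32_t));
        hei->access_pattern[PATTERN_DEPTH - 1] = location;
    }
}

/* Snapshot the workload for the last period; the read_instruction_counter
 * callback stands in for a platform-specific hardware counter read. */
static void sample_workload(struct host_env_info *hei,
                            uint64_t (*read_instruction_counter)(void),
                            uint64_t *prev_count)
{
    uint64_t now = read_instruction_counter();
    hei->workload = now - *prev_count;
    *prev_count = now;
}
```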
The model classifier 160 selects a model in which the host device 100 accesses the storage device 200, based on the host environment information HEI. For example, the model classifier 160 may select one of predefined models based on the host environment information HEI. In an embodiment, the model classifier 160 includes a classifier created as a result of machine learning and selects a model by using the classifier. The model classifier 160 sends a model selection request MSR indicating the selected model to the storage device 200.
The storage device 200 includes a model selection module 234, a model execution processor 240 (e.g., a central processing unit, a digital signal processor, etc.), and a nonvolatile memory device 280. The model selection module 234 selects a model depending on the model selection request MSR. In an embodiment, the host device 100 sends a signal to the storage device 200 including the model selection request MSR. The model selection module 234 may load model data MOD of the selected model on the model execution processor 240. In an embodiment, the model selection module 234 is implemented as a computer program that is executed by a processor of the storage device 200. In another embodiment, the model selection module 234 is implemented by a logic circuit, a memory or registers storing the model data MOD of each of a plurality of models, and a multiplexer that receives signals indicative of the models. The model selection request MSR is applied as a control signal to the multiplexer to cause one of the signals to be output to the logic circuit, and the logic circuit loads the corresponding model data MOD onto the model execution processor 240.
The model execution processor 240 receives the model data MOD from the model selection module 234. The model execution processor 240 executes the selected model by executing the model data MOD. For example, the model execution processor 240 may schedule tasks corresponding to an access request from the host device 100, and background tasks or foreground tasks for managing the storage device 200, based on the selected model. A background task is a computer process that runs in the background without user intervention (e.g., logging, system monitoring, scheduling, user notification, etc.).
For example, the model data MOD may include a classifier created as a result of machine learning. The model execution processor 240 may schedule various tasks based on the machine learning and priorities or weights assigned by the selected model. In an embodiment, the classifier is an executable program that is configured to classify data into one of a plurality of different classes.
The nonvolatile memory device 280 may be accessed according to the tasks scheduled by the model execution processor 240. The nonvolatile memory device 280 may include at least one of various nonvolatile memory devices such as a flash memory device, a phase change random access memory (PRAM) device, a ferroelectric random access memory (FRAM) device, or a resistive random access memory (RRAM) device.
In operation S120, the model classifier 160 of the host device 100 sends the model selection request MSR indicating the selected model to the storage device 200. In operation S130, the model selection module 234 of the storage device 200 loads model data MOD of the selected model on the model execution processor 240.
If the model is completely selected (changed), in operation S140, the host device 100 sends access requests to the storage device 200. For example, the access requests may include a write request, a read request, or a trim request. The trim request may allow the host device 100 to inform the storage device 200 of information of a deleted file. In an embodiment, the trim request informs the storage device 200 of which blocks of the storage device 200 are no longer considered in use so they can be deleted internally.
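As a rough illustration of the effect of a trim request inside a storage device, the following C sketch invalidates logical-to-physical mapping entries for a trimmed range. The flat mapping table, the UNMAPPED marker, and the handle_trim() name are assumptions for illustration only and do not describe the disclosed firmware.

```c
/* Minimal sketch: record a trim by invalidating mapping entries; layout and
 * names are illustrative assumptions. */
#include <stdint.h>

#define NUM_LBAS 1024u
#define UNMAPPED 0xFFFFFFFFu

static uint32_t l2p_table[NUM_LBAS]; /* logical block -> physical page, or UNMAPPED */

/* Mark the trimmed logical range as no longer in use; the physical pages that
 * held the old data can later be reclaimed internally (e.g., by garbage collection). */
static void handle_trim(uint32_t start_lba, uint32_t count)
{
    for (uint32_t i = 0; i < count && (start_lba + i) < NUM_LBAS; i++)
        l2p_table[start_lba + i] = UNMAPPED;
}
```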
In operation S150, the model execution processor 240 of the storage device 200 executes the access requests depending on the selected model. For example, the model data MOD may include a task classifier that is based on machine learning. The model execution processor 240 may perform task classification based on the access requests from the host device 100 or pieces of environment information of the storage device 200. The model execution processor 240 may schedule an access task (e.g., a task that writes data to memory or reads data from memory), a background task, or a foreground task associated with the nonvolatile memory device 280 based on the classification result.
The processor 110 includes an application module 111 and the information collection module 114. The application module 111 may execute various applications 112 and processes 113 that use the storage device 200. The application module 111 may be driven according to a request of a user or automatically according to an internal schedule of the processor 110, and need not be associated with selection of a model.
The applications 112 or the processes 113 generate a first access request AR1 for accessing the storage device 200. For example, the first access request AR1 may include a read, write, or trim request for the storage device 200. The first access request AR1 may be transmitted to the device interface 170.
The information collection module 114 performs tasks associated with selection of a model. The information collection module 114 includes a power monitor 115, a user configuration register 116, an input and output monitor 117, and an application and process monitor 118. The power monitor 115, the user configuration register 116, the input and output monitor 117, and the application and process monitor 118 respectively collect first host environment information HEI1, second host environment information HEI2, third host environment information HEI3, and fourth host environment information HEI4 and provide the model classifier 160 with the first to fourth host environment information HEI1 to HEI4 as the host environment information HEI.
The power monitor 115 collects information associated with power of the host device 100 from the power module 120, as the first host environment information HEI1. The following table 1 shows an example of the first host environment information HEI1 of the host device 100, which the power monitor 115 collects. For example, the power monitor 115 could retrieve one or more rows of table 1 from the power module 120. The Power type in table 1 indicates whether the host device 100 is powered by AC or DC power. The Battery level in table 1 indicates a charge percentage of a battery providing power to the host device 100. The Count in table 1 indicates the number of times that the host device 100 enters the low-power mode during a previous time period. For example, if the Count is 2, the host device 100 has entered the low-power mode two times during the previous N seconds. The Duration in table 1 indicates an amount of time the host device 100 maintained the low-power mode during a previous period. For example, if the Duration is 3 and the units of the Duration are seconds, the host device 100 maintained the low-power mode for 3 seconds.
The user configuration register 116 stores information configured by the user. The user configuration register 116 may store preset operation preferences. The user configuration register 116 may further store information about an operation tendency preferred by the user among stored operation tendencies. The user configuration register 116 may provide information about operation tendencies as the second host environment information HEI2. The following table 2 shows an example of the second host environment information HEI2 collected by the user configuration register 116. For example, the Environment type in table 2 may indicate the type of a device to which the storage device belongs (e.g., the host device 100). The Performance-centered parameter in table 2 may be a measure of how interested the user is in performance (e.g., operating at a high operating frequency). The Low power-centered parameter in table 2 may indicate how interested the user is in saving power. The Reliability-centered parameter in table 2 may indicate how interested the user is in reliable data. For example, if the value of the parameter is high, the host device 100 could store redundant copies of data and/or store data with error correcting codes on the storage device 200.
The input and output monitor 117 monitors a characteristic in which the host device 100 accesses the storage device 200 and provides the monitoring result as the third host environment information HEI3. For example, the input and output monitor 117 may monitor a characteristic in which the host device 100 accesses the storage device 200 during the previous "N" seconds (N being a positive integer). The following table 3 shows an example of information that the input and output monitor 117 collects as the third host environment information HEI3. The Idle time parameter of table 3 may indicate the most recent amount of time the host device 100 was idle (e.g., did not access the storage device 200). The Read load parameter of table 3 may indicate a percentage of the read bandwidth used by the host device 100 during a previous period. The Write load parameter of table 3 may indicate a percentage of the write bandwidth used by the host device 100 during a previous period. The Expected read load parameter of table 3 may indicate a percentage of the read bandwidth that the host device 100 is expected to use during a next period. The Expected write load parameter of table 3 may indicate a percentage of the write bandwidth that the host device 100 is expected to use during a next period. The Number of opened file handlers parameter of table 3 may indicate the number of file handlers or file descriptors opened for files stored on the storage device 200.
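For illustration, the following C sketch shows one way an input and output monitor could turn raw byte counters for the previous period into load percentages of the kind listed in table 3. The structure, the assumed maximum-bandwidth figures, and the function names are hypothetical and not taken from the embodiments.

```c
/* Illustrative sketch: derive read/write load percentages from byte counters
 * collected during the previous N seconds; all names and limits are assumed. */
#include <stdint.h>

struct io_sample {
    uint64_t bytes_read;     /* bytes read from the storage device in the period */
    uint64_t bytes_written;  /* bytes written to the storage device in the period */
    uint32_t period_sec;     /* length of the period, N seconds */
};

/* Percentage of the available bandwidth used, clamped to 0..100. */
static uint32_t load_percent(uint64_t bytes, uint32_t period_sec,
                             uint64_t max_bytes_per_sec)
{
    uint64_t used = bytes / (period_sec ? period_sec : 1);
    uint64_t pct = (used * 100u) / max_bytes_per_sec;
    return pct > 100u ? 100u : (uint32_t)pct;
}

static void fill_hei3(const struct io_sample *s,
                      uint32_t *read_load, uint32_t *write_load)
{
    const uint64_t MAX_READ_BPS  = 500ull * 1024 * 1024; /* assumed 500 MB/s ceiling */
    const uint64_t MAX_WRITE_BPS = 400ull * 1024 * 1024; /* assumed 400 MB/s ceiling */
    *read_load  = load_percent(s->bytes_read,    s->period_sec, MAX_READ_BPS);
    *write_load = load_percent(s->bytes_written, s->period_sec, MAX_WRITE_BPS);
}
```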
The application and process monitor 118 may collect information about access patterns of the applications 112 or the processes 113 executed in the processor 110 from the input and output pattern database 150. The application and process monitor 118 may provide the collected information as the fourth host environment information HEI4. The following table 4 shows an example of the fourth host environment information HEI4 collected by the application and process monitor 118.
For example, an application having a name of “word processor” may execute a process having a name of “WORD.exe”. As an example, an average read load of the word processor may have a value of “10” between “0” and “100”. The average read load may indicate how read intensive the application is during a given period. An average write load of the word processor may have a value of “30” between “0” and “100”. The average write load may indicate how write intensive the application is during a given period. A sequential read ratio of the word processor may have a value of “70” between “0” and “100”. A sequential write ratio of the word processor may have a value of “10” between “0” and “100”.
For example, an application having a name of “media player” may execute a process having a name of “PLAYER.exe”. An average read load of the media player may be “70”, an average write load thereof may be “10”, a sequential read ratio thereof may be “90”, and a sequential write ratio thereof may be “80”. An application having a name of “file sharing” may execute a process having a name of “TORRENT.exe”. An average read load of the file sharing may be “50”, an average write load thereof may be “80”, a sequential read ratio thereof may be “90”, and a sequential write ratio thereof may be “90”.
An application having a name of “explorer” may execute a process having a name of “EXPLORER”. An average read load of the explorer may be “20”, an average write load thereof may be “10”, a sequential read ratio thereof may be “40”, and a sequential write ratio thereof may be “10”.
In the case where pattern information of a specific application/process among the applications 112 or the processes 113 is not stored in the input and output pattern database 150, the processor 110 may request the pattern information of the specific application/process from a server through the modem 130. In the case where pattern information is absent from a server, the input and output monitor 117 may analyze a pattern in which the specific application/process accesses the storage device 200.
The input and output monitor 117 may store the analysis result in the input and output pattern database 150 as pattern information of the specific application/process. For example, the processor 110 may upload the pattern information of the specific application/process to a server through the modem 130.
The power module 120 may supply power to the host device 100. The power module 120 may include a battery or a rectifier as an example. The modem 130 may communicate with an external device through wired or wireless communication under control of the processor 110. The memory 140 may be a working memory or a system memory of the processor 110. The memory 140 may include a dynamic random access memory (DRAM).
The input and output pattern database 150 may include information about patterns of an input and output through which applications or processes access the storage device 200. The input and output pattern database 150 may be stored in a nonvolatile memory. For example, the input and output pattern database 150 may be stored in the nonvolatile memory device 280 of the storage device 200.
For example, the processor 110 may send an access request to the storage device 200 through the device interface 170, thus accessing (or searching) the input and output pattern database 150 stored in the nonvolatile memory device 280 of the storage device 200.
As another example, the input and output pattern database 150 may be stored in a remote server. In this example, the input and output pattern database 150 is not locally stored in the computing system 10. The processor 110 may send an access request to the remote server through the modem 130, thus accessing the input and output pattern database 150 stored in the remote server.
The input and output pattern database 150 may store information about patterns in which applications installed or not installed in the host device 100, or processes previously executed or not executed in the host device 100, access the storage device 200. For example, the input and output pattern database 150 could indicate a sequence of applications or processes that have accessed the storage device 200 during a given period or a series of data that has been accessed in the storage device 200.
In an embodiment, the model classifier 160 is created using machine learning. In an embodiment, the model classifier 160 receives the host environment information HEI and classifies a model suitable for the storage device 200 based on the host environment information HEI. The model classifier 160 sends a first model selection request MSR1 including the information of the classified models to the device interface 170. The following table 5 shows an example of a classification result of the model classifier 160.
As shown in table 5, the model classifier 160 may classify the probabilities of performance-centered and balanced as the highest. The model classifier 160 may send the first model selection request MSR1 to the device interface 170 so as to select a model corresponding to performance-centered and balanced. For example, a balanced expected load may mean that the number of reads is expected to be equal to the number of writes or that the amount of resources used to perform the reads is expected to be the same as the amount of resources used to perform the writes.
As shown in table 5, the model classifier 160 may perform classification by using a tendency of a user stored in the user configuration register 116 as a first element (or a main element). The model classifier 160 may perform classification by using the remaining environment information other than the tendency of the user as a second element (or an additional element).
For example, the model classifier 160 may be implemented with a separate semiconductor chip that is separated from the processor 110. For example, the model classifier 160 may be implemented with a semiconductor chip, which is suitable to perform machine learning-based classification, such as a field programmable gate array (FPGA), a graphics processor (GPU), or a neuromorphic chip.
The device interface 170 receives the first access request AR1 from the processor 110 and receives the first model selection request MSR1 from the model classifier 160. The device interface 170 may receive a signal from the model classifier 160 including the first model selection request MSR1. The device interface 170 may translate the first access request AR1 and the first model selection request MSR1 having a format used within the host device 100 to a second access request AR2 and a second model selection request MSR2 having a format used in communication between the host device 100 and the storage device 200, respectively.
The device interface 170 sends the second access request AR2 and the second model selection request MSR2 to the storage device 200. When the second access request AR2 is a write request, the device interface 170 reads data corresponding to the write request from the memory 140 and sends the read data to the storage device 200. When the second access request AR2 is a read request, the device interface 170 stores data transmitted from the storage device 200 in the memory 140.
As described above, the host device 100 may perform machine learning-based model classification based on the tendency of the user and the environment of the host device 100. The host device 100 may send (e.g., direct, propose, or recommend) a model, which is classified as being suitable for the storage device 200, to the storage device 200 through the second model selection request MSR2.
In operation S220, the model classifier 160 determines whether a probability of a selected model (e.g., a model having the highest probability) is greater than a threshold. For example, the threshold may be set by the model classifier 160. As another example, the threshold may be set by the user or the external device. The threshold may be stored in the user configuration register 116.
If the probability of the selected model is greater than the threshold, the model classifier 160 generates the first model selection request MSR1. That is, on the basis of the selected model, the model classifier 160 maintains a previous model or requests selection of a new model through the first model selection request MSR1. For example, in the case where a previous model is maintained, the model classifier 160 may skip generation of the first model selection request MSR1.
In an embodiment, if the probability of the selected model is not greater than the threshold, the model classifier 160 does not generate the first model selection request MSR1. That is, a model that was previously selected and executed in the storage device 200 is maintained. According to the embodiment described with reference to
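The threshold check described above can be summarized by the following C sketch: a model selection request is generated only when the highest classification probability exceeds the threshold and the winning model differs from the model currently in use. The model identifiers and the probability array are illustrative assumptions, not the disclosed classifier output format.

```c
/* Hedged sketch of the threshold-based selection the model classifier 160
 * might apply; enum values and signatures are assumptions. */
#include <stdbool.h>
#include <stddef.h>

enum model_id { MODEL_PERFORMANCE, MODEL_LOW_POWER, MODEL_RELIABILITY, MODEL_BALANCED, MODEL_COUNT };

/* Returns true and fills *request when a new model should be requested. */
static bool build_model_selection_request(const float prob[MODEL_COUNT],
                                          float threshold,
                                          enum model_id current,
                                          enum model_id *request)
{
    enum model_id best = MODEL_PERFORMANCE;
    for (size_t m = 1; m < MODEL_COUNT; m++)
        if (prob[m] > prob[best])
            best = (enum model_id)m;

    if (prob[best] <= threshold)   /* not confident enough: keep the previous model */
        return false;
    if (best == current)           /* previous model maintained: skip the request */
        return false;

    *request = best;               /* corresponds to the first model selection request MSR1 */
    return true;
}
```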
Referring to
The host interface 210 may receive the second access request AR2 and the second model selection request MSR2 from the device interface 170. The host interface 210 may translate the second access request AR2 and the second model selection request MSR2 having a format used in communication between the host device 100 and the storage device 200 to a third access request AR3 and a third model selection request MSR3 having a format used within the storage device 200, respectively.
The host interface 210 sends the third access request AR3 to the first buffer memory 220. The host interface 210 may send a signal including the third access request AR3 to the first buffer memory 220. The host interface 210 sends the third model selection request MSR3 to the controller 230. The host interface 210 may send a signal including the third model selection request MSR3 to the controller 230. When the third access request AR3 is a write request, the host interface 210 stores data received from the device interface 170 in the third buffer memory 260. After data is stored in the third buffer memory 260 depending on the write request, the host interface 210 sends the data stored in the third buffer memory 260 to the memory interface 270.
The first buffer memory 220 receives the third access request AR3 from the host interface 210. For example, the first buffer memory 220 may be a queue to store third access requests. The first buffer memory 220 may be provided within the controller 230, may be provided within the model execution processor 240, or may be combined with the second buffer memory 250 or the third buffer memory 260.
The controller 230 may execute firmware for driving the storage device 200. The controller 230 may control components of the storage device 200 depending on the firmware. The controller 230 may receive the third model selection request MSR3 from the host interface 210. The controller 230 may select a model based on the third model selection request MSR3 and may load the selected model on the model execution processor 240.
The controller 230 includes a device information collection module 231 and the model selection module 234. The device information collection module 231 collects environment information of the storage device 200 and provides the collected environment information to the model execution processor 240 as device environment information DEI. The controller 230 may output a signal including the device environment information DEI to the model execution processor 240.
The device information collection module 231 may include a resource monitor 232 and an event monitor 233. The resource monitor 232 may monitor resources of the storage device 200 and may include information about the resources in the device environment information DEI. The following table 6 shows an example of the device environment information DEI that the resource monitor 232 provides. The single level cell free memory block parameter of table 6 may indicate a percentage of the memory blocks of the storage device 200 that are free and whose memory cells are configured to store only a single bit. The multi-level cell free memory block parameter of table 6 may indicate a percentage of the memory blocks of the storage device 200 that are free and whose memory cells are configured to store two bits. The triple level cell free memory block parameter of table 6 may indicate a percentage of the memory blocks of the storage device 200 that are free and whose memory cells are configured to store three or more bits. The valid data ratio parameters of table 6 may indicate a percentage of the memory cells of a certain type that have valid data. The max/min program/erase count parameters of table 6 may indicate a maximum/minimum number of program/erase operations performed on the memory cells of a certain type. The Residual ratio of buffer memory parameter may indicate how full or empty the buffer memories are.
A memory cell storing 1-bit data may be a single level cell. A memory block including single level cells may be a single level cell memory block. A memory block including single level cells where data are not stored may be a free memory block of single level cells.
A memory cell storing 2-bit data may be a multi-level cell. A memory cell storing 3-bit data may be a triple level cell. However, the inventive concept is not limited to single level cells, multi-level cells, and triple level cells. For example, the inventive concept may also be applied to a case where four or more data bits are stored in one memory cell.
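For illustration, the following C sketch shows a record of the kind the resource monitor 232 could fill in as device environment information DEI corresponding to table 6, together with a helper that derives a free-block percentage from raw block counts. The field set, names, and helper are assumptions, not the disclosed data format.

```c
/* Illustrative record for resource-monitor output; field names are assumed. */
#include <stdint.h>

struct device_env_info {
    uint32_t slc_free_pct;        /* % of single level cell blocks that are free */
    uint32_t mlc_free_pct;        /* % of multi-level cell blocks that are free */
    uint32_t tlc_free_pct;        /* % of triple level cell blocks that are free */
    uint32_t slc_valid_pct;       /* % of SLC cells holding valid data */
    uint32_t max_pe_count;        /* maximum program/erase count observed */
    uint32_t min_pe_count;        /* minimum program/erase count observed */
    uint32_t buffer_residual_pct; /* how empty the buffer memories are */
};

static uint32_t percent(uint32_t part, uint32_t whole)
{
    return whole ? (part * 100u) / whole : 0u;
}

/* Example: derive the SLC free-block ratio from raw block counts. */
static void update_slc_free(struct device_env_info *dei,
                            uint32_t free_slc_blocks, uint32_t total_slc_blocks)
{
    dei->slc_free_pct = percent(free_slc_blocks, total_slc_blocks);
}
```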
The event monitor 233 may monitor events occurring in the storage device 200 and may include information about the events in the device environment information DEI. The following table 7 shows an example of the device environment information DEI that the event monitor 233 collects.
Pending read requests, write requests, or trim requests of the host device 100 may include requests stored in the first buffer memory 220, or tasks issued by the host device 100 and scheduled in the second buffer memory 250 through classification by the model execution processor 240.
The model selection module 234 may select a model depending on the third model selection request MSR3. The model selection module 234 includes a model selector 235 and a model downloader 236. The model selector 235 sends a first model request MR1 requesting a model selected by the third model selection request MSR3 to the memory interface 270. The model selector 235 may send a signal to the memory interface 270 including the first model request MR1.
If model data MOD of the selected model is stored in the nonvolatile memory device 280, the memory interface 270 may read the model data MOD from the nonvolatile memory device 280. If the model data MOD is transmitted from the memory interface 270, the model selector 235 may send (or load) the model data MOD to the model execution processor 240.
In the case where the model data MOD of the selected model is not transmitted from the memory interface 270, for example, in the case where the model data MOD of the selected model is not stored in the nonvolatile memory device 280, the model downloader 236 may request the model data MOD from the host device 100. The model data MOD may be stored in the nonvolatile memory device 280 through the third buffer memory 260 as a part of data.
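The selection path described above, reading the model data MOD from the nonvolatile memory device and falling back to downloading it from the host when it is absent, can be sketched as follows. The function-pointer interfaces stand in for the memory interface 270 and the model downloader 236 and are illustrative assumptions rather than the disclosed firmware interfaces.

```c
/* Sketch of model-data loading with a host-download fallback; names and
 * signatures are assumptions. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

typedef bool (*read_model_fn)(uint32_t model_id, void *buf, size_t len);     /* memory interface path */
typedef bool (*download_model_fn)(uint32_t model_id, void *buf, size_t len); /* host download path */

/* Returns true when model data is available in buf and can be loaded
 * onto the model execution processor. */
static bool load_model_data(uint32_t model_id, void *buf, size_t len,
                            read_model_fn read_from_nvm,
                            download_model_fn download_from_host)
{
    if (read_from_nvm(model_id, buf, len))      /* model data stored in the nonvolatile memory */
        return true;
    if (download_from_host(model_id, buf, len)) /* fallback corresponding to the model downloader */
        return true;
    return false;
}
```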
The model execution processor 240 may receive and execute the model data MOD of the selected model from the controller 230. In an embodiment, the selected model includes a task classifier 241 and a policy classifier 242. In an embodiment, the task classifier 241 performs machine learning-based classification on the device environment information DEI. The following table 8 shows an example of tasks classified by the task classifier 241.
For example, the task classifier 241 may select a task, the probability of which is greater than a threshold, or two or more tasks as described with reference to
The policy classifier 242 may classify a policy corresponding to a selected task depending on the device environment information DEI and the selected task (e.g., a scheduled task). The following table 9 shows an example of policies classified by the policy classifier 242.
When a selected task is associated with a write operation, one of policies “0” to “2” may be selected. When a selected task is associated with a data transfer, one of policies “3” to “5” may be selected. When a selected task is associated with wear-leveling management, one of policies “6” to “8” may be selected. When a selected task is associated with garbage collection, one of policies “9” to “11” may be selected.
When a selected task is associated with data scrubbing, one of policies “12” to “14” may be selected. In an embodiment, the data scrubbing uses a background task to periodically inspect the storage device 200 for errors, then corrects detected errors using redundant data. That is, with regard to the selected task, a single level cell memory block, a multi-level cell memory block, or a triple level cell memory block may be selected as a task target. For example, when two or more tasks are selected, two or more policies corresponding to the selected tasks may be selected.
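As a simplified illustration of pairing a classified task with a classified policy and scheduling both, consider the following C sketch. The task and policy identifiers only loosely mirror tables 8 and 9, and the fixed-depth queue standing in for the second buffer memory 250 is an assumption for illustration.

```c
/* Hedged sketch of task/policy scheduling; classifiers are passed in as
 * callbacks and all identifiers are illustrative assumptions. */
#include <stdint.h>
#include <stdbool.h>

enum task_kind  { TASK_WRITE, TASK_TRANSFER, TASK_WEAR_LEVELING, TASK_GC, TASK_SCRUB };
enum block_type { BLK_SLC, BLK_MLC, BLK_TLC };  /* policy: which block type to target */

struct scheduled_entry {
    enum task_kind  task;
    enum block_type policy;
};

#define QUEUE_DEPTH 16
static struct scheduled_entry second_buffer[QUEUE_DEPTH];
static uint32_t queue_len;

/* classify_task() and classify_policy() stand in for the machine
 * learning-based classifiers executed from the model data MOD. */
static bool schedule(enum task_kind (*classify_task)(void),
                     enum block_type (*classify_policy)(enum task_kind))
{
    if (queue_len >= QUEUE_DEPTH)
        return false;                       /* scheduling queue full */
    enum task_kind  t = classify_task();    /* e.g., highest-probability task */
    enum block_type p = classify_policy(t); /* policy matched to that task */
    second_buffer[queue_len].task   = t;
    second_buffer[queue_len].policy = p;
    queue_len++;
    return true;
}
```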
The model execution processor 240 may fetch a request corresponding to a selected task among requests stored in the first buffer memory 220. For example, the model execution processor 240 may fetch a request corresponding to a selected task from the first buffer memory 220 in a first-in first-out manner or depending on a priority assigned to requests 221.
The model execution processor 240 may schedule the fetched request as a task in the second buffer memory 250. The model execution processor 240 may schedule the selected policy in the second buffer memory 250. For example, tasks 251 may have corresponding policies 252, respectively. As another example, some of the tasks may have corresponding policies while the remaining tasks do not have corresponding policies.
The model execution processor 240 may be implemented with a separate semiconductor chip that is separated from the controller 230. For example, the model execution processor 240 may be implemented with a semiconductor chip, which is suitable to perform machine learning-based classification, such as a field programmable gate array (FPGA), a graphics processor (GPU), or a neuromorphic chip.
The second buffer memory 250 may store the tasks 251 and the policies 252 scheduled by the model execution processor 240. For example, the second buffer memory 250 may be a queue to store the tasks 251 and the policies 252. The second buffer memory 250 may be provided within the controller 230, may be provided within the model execution processor 240, or may be combined with the first buffer memory 220 or the third buffer memory 260.
The third buffer memory 260 may be a system memory or a main memory used to drive firmware of the controller 230. The third buffer memory 260 may be used to buffer data conveyed between the host device 100 and the nonvolatile memory device 280. The third buffer memory 260 may be included within the controller 230 or the model execution processor 240. The third buffer memory 260 may be combined with the first buffer memory 220 or the second buffer memory 250.
The memory interface 270 receives the first model request MR1 from the controller 230. The memory interface 270 may translate the first model request MR1 to the second model request MR2 having a format used in communication between the memory interface 270 and the nonvolatile memory device 280. The memory interface 270 sends the second model request MR2 to the nonvolatile memory device 280. The memory interface 270 may send a signal including the second model request MR2 to the nonvolatile memory device 280.
The nonvolatile memory device 280 may output the model data MOD in response to the second model request MR2. The memory interface 270 may send the model data MOD to the controller 230. As another example, the memory interface 270 may store the model data MOD in the third buffer memory 260. The model data MOD stored in the third buffer memory 260 may be transmitted (or loaded) to the model execution processor 240.
The memory interface 270 may fetch a task and a policy as a fourth access request AR4 from the second buffer memory 250. For example, the memory interface 270 may fetch a task and a policy in a first-in first-out manner or based on an assigned weight.
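The two fetch orders mentioned here, first-in first-out or highest assigned weight first, can be sketched as follows; the queue layout and the weight field are illustrative assumptions rather than the disclosed buffer format.

```c
/* Sketch of FIFO versus weighted fetch from a scheduling queue; assumed layout. */
#include <stdint.h>

struct queued_task {
    uint32_t id;
    uint32_t weight;  /* larger weight = fetched earlier in weighted mode */
};

/* Returns the index of the entry to fetch next, or -1 when the queue is empty. */
static int next_task_index(const struct queued_task *q, uint32_t len, int weighted)
{
    if (len == 0)
        return -1;
    if (!weighted)
        return 0;                 /* FIFO: oldest entry first */
    uint32_t best = 0;
    for (uint32_t i = 1; i < len; i++)
        if (q[i].weight > q[best].weight)
            best = i;
    return (int)best;
}
```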
The memory interface 270 may translate the fetched fourth access request AR4 to a fifth access request AR5 having a format used in communication between the memory interface 270 and the nonvolatile memory device 280. The memory interface 270 sends the fifth access request AR5 to the nonvolatile memory device 280.
When the fifth access request AR5 is a write request, the memory interface 270 sends data stored in the third buffer memory 260 to the nonvolatile memory device 280. When the fifth access request AR5 is a read request, the memory interface 270 stores data transmitted from the nonvolatile memory device 280 in the third buffer memory 260.
The nonvolatile memory device 280 includes memory blocks 281. Each of the memory blocks 281 includes memory cells. The memory cells may include single level cells, multi-level cells, and/or triple level cells. A model database 282 may be stored in a first part of the memory blocks 281. The model database 282 may include task models 283 and policy models 284. User data may be stored in another part of the memory blocks 281. For example, first memory blocks may be used to store the model database 282 and second memory blocks may be used to store user data.
The nonvolatile memory device 280 may perform a write, read, or erase operation on memory blocks in response to the fifth access request AR5. The nonvolatile memory device 280 may exchange data with the memory interface 270 in a write or read operation. In response to the second model request MR2, the nonvolatile memory device 280 may provide the memory interface 270 with one of the task models 283 or one of the policy models 284 stored in the model database 282 as the model data MOD.
As described above, the storage device 200 may select and execute a model in response to a request of the host device 100. The storage device 200 may schedule requests transmitted from the host device 100, background operations, or foreground operations depending on the selected model. Since scheduling of tasks is performed on the basis of the machine learning and since a machine learning model is changed according to a preference of the user and an environment, performance, power consumption, and reliability are optimized according to preferences of users. Thus, embodiments of the inventive concept are advantageous over conventional storage devices that schedule tasks based on a preset strategy, resulting in sub-optimal performance, power consumption, and reliability. Further, when the storage device 200 of the inventive concept is disposed within a computer, it results in an improvement to the functioning of the computer as compared to computers that operate with a conventional storage device. For example, a computer including the storage device 200 and modified to include the elements of the host device 100 uses less power than a conventional computer, has a higher operating performance than a conventional computer, and is more reliable than a conventional computer.
In an embodiment, the model selection module 234 updates a model of the task classifier 241 and a model of the policy classifier 242 together in response to the third model selection request MSR3. As another example, the model selection module 234 may update one of a model of the task classifier 241 and a model of the policy classifier 242 in response to the third model selection request MSR3.
The number of input nodes, the number of hidden nodes, and the number of output nodes are not limited thereto. For example, the number of input nodes may be determined according to the amount or the number of pieces of information included in the host environment information HEI or the device environment information DEI input to the classifier CF1. The number of output nodes may be determined according to the number of models, the number of tasks, or the number of policies. The number of input nodes, the number of hidden nodes, and the number of output nodes may be determined in advance upon constructing the neural network.
The first to fourth input nodes IN1 to IN4 form an input layer. The first to fifth hidden nodes HN1 to HN5 form a first hidden layer. The sixth to tenth hidden nodes HN6 to HN10 form a second hidden layer. The first to fourth output nodes ON1 to ON4 form an output layer. The number of hidden layers or the number of hidden nodes of each hidden layer may be determined in advance upon constructing the neural network.
The host environment information HEI or the device environment information DEI may be input to the first to fourth input nodes IN1 to IN4. Environment information of different kinds may be input to different input nodes. The environment information of each input node is transmitted to the first to fifth hidden nodes HN1 to HN5 of the first hidden layer, with weights applied to the environment information thereof.
Information of each of the first to fifth hidden nodes HN1 to HN5 is transmitted to the sixth to tenth hidden nodes HN6 to HN10 of the second hidden layer, with weights applied to the information thereof. Information of each of the sixth to tenth hidden nodes HN6 to HN10 is transmitted to the first to fourth output nodes ON1 to ON4, with weights applied to the information thereof. Information of each of the first to fourth output nodes ON1 to ON4 may indicate a probability or a ratio as described with reference to table 5, table 8, or table 9.
In each of the root node RN and the first to fourth internal nodes IN1 to IN4, comparison may be made with respect to one of pieces of information included in the host environment information HEI or the device environment information DEI. Values may be respectively transmitted to a plurality of branches connected with each node according to the comparison result. The comparison and the transmission of values may be performed with respect to all internal nodes. Values transmitted to the first to sixth leaf nodes LN1 to LN6 may indicate a probability or a ratio as described with reference to table 5, table 8, or table 9.
As another example, one of a plurality of branches connected to each node may be selected according to the comparison result. In the case where another internal node has been connected to the selected branch, comparison may be made with respect to other information of the pieces of information included in the host environment information HEI or the device environment information DEI at the internal node. In the case where a leaf node has been connected to the selected branch, a value of the leaf value may be obtained as a classification result CR.
The number of the internal nodes IN1 to IN4 and the number of the leaf nodes LN1 to LN6 are not limited thereto. For example, the number of the internal nodes IN1 to IN4 may be determined according to the amount or the number of pieces of information included in the host environment information HEI or the device environment information DEI input to the classifier CF2. The number of leaf nodes may be determined according to the number of models, the number of tasks, or the number of policies.
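A classification with a decision tree such as CF2 can be sketched as follows: each internal node compares one piece of environment information with a threshold and follows a branch until a leaf value is reached as the classification result CR. The node layout and threshold rule are illustrative assumptions.

```c
/* Small sketch of decision-tree classification; node layout is assumed. */
#include <stddef.h>

struct tree_node {
    int   feature;       /* index into the environment-information vector; -1 for a leaf */
    float threshold;     /* go left if value <= threshold, otherwise right */
    int   left, right;   /* child indices within the node array */
    float leaf_value;    /* classification result CR when feature == -1 */
};

static float classify_cf2(const struct tree_node *nodes, const float *env)
{
    int idx = 0;                                 /* start at the root node RN */
    while (nodes[idx].feature >= 0) {            /* internal nodes IN1..IN4 */
        float v = env[nodes[idx].feature];
        idx = (v <= nodes[idx].threshold) ? nodes[idx].left : nodes[idx].right;
    }
    return nodes[idx].leaf_value;                /* leaf nodes LN1..LN6 */
}
```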
The processor 110a may include the application module 111 and an information collection module 114a. The application module 111 may execute various applications 112 and processes 113 that use the storage device 200. The information collection module 114a includes the power monitor 115, the user configuration register 116, the input and output monitor 117, the application and process monitor 118, and an evaluator 119.
Compared with the host device 100 of
For example, the evaluator 119 may perform the evaluation and provide the fifth host environment information HEI5 after the model is selected and a specific time elapses, or after the model is selected and a specific number of access operations are performed. For example, the evaluator 119 may perform the evaluation and provide the fifth host environment information HEI5 after the model is selected and at least one of a read operation, a write operation, and an erase operation (or each of the read, write, and erase operations) is performed at least the specific number of times.
The model classifier 160 may further receive the fifth host environment information HEI5 as the host environment information HEI. The model classifier 160 may further perform classification on the basis of the fifth host environment information HEI5. On the basis of the fifth host environment information HEI5, the model classifier 160 may maintain a current model or may request replacement of the current model with a new model.
For example, the model classifier 160 may apply the fifth host environment information HEI5 after the model is selected and a specific time elapses, or after the model is selected and a specific number of access operations are performed. For example, the model classifier 160 may apply the fifth host environment information HEI5 after the model is selected and at least one of a read operation, a write operation, and an erase operation (or each of the read, write, and erase operations) is performed at least the specific number of times.
In operation S320, the model classifier 160 of the host device 100 sends the model selection request MSR indicating the selected model to the storage device 200. In operation S330, the model selection module 234 of the storage device 200 loads model data MOD of the selected model on the model execution processor 240.
If the model is completely selected (changed), in operation S340, the host device 100 may send access requests to the storage device 200. In operation S350, the model execution processor 240 of the storage device 200 executes the access requests depending on the selected model.
After access operations are performed in the storage device 200 depending on the access requests, operation S360 is performed. For example, operation S360 may be performed after the model is selected and a specific time elapses, or after at least one of a read operation, a write operation, and an erase operation (or each of the read, write, and erase operations) is performed at least the specific number of times.
In operation S360, the host device 100 evaluates performance and reselects a model. For example, the model classifier 160 may again perform selection (or classification) of a model based on the fifth host environment information HEI5 being the evaluation result of the evaluator 119. The model classifier 160 sends the model selection request MSR indicating the reselected model to the storage device 200 depending on the result of the selection (or classification) thus performed again.
For example, if the reselected model is the same as a previous model, the model classifier 160 does not send the model selection request MSR to the storage device 200. If the reselected model is different from the previous model, the model classifier 160 sends the model selection request MSR to the storage device 200. The storage device 200 may replace a model depending on the model selection request MSR.
For example, the evaluation and the selection based on the evaluation may be performed consistently. For example, selection may be performed again on the basis of the fifth host environment information HEI5 of the evaluator 119 after the model is reselected and a specific time elapses, or after the model is reselected and at least one of a read operation, a write operation, and an erase operation (or each of the read, write, and erase operations) is performed at least the specific number of times.
For example, if the result of evaluating performance is lower than a threshold value, the model classifier 160 selects a model different from a previous model. If the result of evaluating performance is not lower than the threshold value, the model classifier 160 maintains the previous model. The threshold value may vary depending on settings of the user configuration register 116. The threshold may be set by a user or an external device.
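The reselection rule just described can be summarized by the following C sketch: a model selection request is sent only when the evaluated performance falls below the threshold and the newly classified model differs from the previous one. The scalar performance score and the function signature are assumptions made for illustration.

```c
/* Sketch of evaluation-driven model reselection; inputs are assumed scalars. */
#include <stdbool.h>
#include <stdint.h>

/* Returns true when a model selection request MSR should be sent. */
static bool reselect_model(float evaluated_performance, float threshold,
                           uint32_t previous_model, uint32_t candidate_model,
                           uint32_t *requested_model)
{
    if (evaluated_performance >= threshold)
        return false;                      /* maintain the previous model */
    if (candidate_model == previous_model)
        return false;                      /* reselected model unchanged: no request */
    *requested_model = candidate_model;
    return true;
}
```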
Unlike the computing system 10 described with reference to
The storage device 200a may include the model execution processor 240, the nonvolatile memory device 280, the device information collection module 231a, and a model classifier 290. Compared with the storage device 200 of
The device information collection module 231a collects environment information of the storage device 200a and provides the collected environment information to the model execution processor 240 as the device environment information DEI. The model classifier 290 selects a model in which the storage device 200a processes the access request AR of the host device 100b, based on the device environment information DEI.
For example, the model classifier 290 may select one of models that are in advance determined by using a machine learning-based classifier with respect to the device environment information DEI. The model classifier 290 sends the model data MOD of the selected model to the model execution processor 240.
Compared with the computing system 10 of
Compared with the storage device 200 of
The controller 230a may include the device information collection module 231 and the model downloader 236. The device information collection module 231 may collect environment information of the storage device 200a and may provide a part or all of the collected environment information to the model classifier 290 as first device environment information DEI1. The device information collection module 231 may provide a part or all of the collected environment information to the model execution processor 240 as second device environment information DEI2.
For example, the first device environment information DEI1 and the second device environment information DEI2 may include overlapping information. For example, the first device environment information DEI1 and the second device environment information DEI2 may not coincide with each other. As another example, the first device environment information DEI1 and the second device environment information DEI2 may coincide with each other.
When the model data MOD is not stored in the nonvolatile memory device 280, the model downloader 236 may request the model data MOD from the host device 100b. The model data MOD may be stored in the nonvolatile memory device 280 through the third buffer memory 260 as a part of data.
In an embodiment, the model classifier 290 is created using machine learning. The model classifier 290 may receive the device environment information DEI and may classify (or select) a model suitable for the storage device 200a based on the device environment information DEI. The model classifier 290 may send the first model request MR1 for requesting the classified models to the memory interface 270.
The model classifier 290 may send the model data MOD received from the memory interface 270 to the model execution processor 240 in response to the first model request MR1. The model classifier 290 may be implemented with a separate semiconductor chip that is separated from the controller 230a. For example, the model classifier 290 may be implemented with a semiconductor chip, which is suitable to perform machine learning-based classification, such as a field programmable gate array (FPGA), a graphics processor (GPU), or a neuromorphic chip.
The host interface 210, the first buffer memory 220, the model execution processor 240, the second buffer memory 250, the third buffer memory 260, the memory interface 270, and the nonvolatile memory device 280 may be configured and operate the same as those described with reference to
As described above, the storage device 200a includes the model classifier 290. The model classifier 290 may classify (or select) a model suitable for the storage device 200a based on the first device environment information DEI1. The model execution processor 240 may execute the selected model to schedule the tasks 251 and the policies 252. Accordingly, the storage device 200a optimized to a use pattern of a user based on the machine learning is provided.
In an embodiment, as described with reference to
Compared with the storage device 200a of
For example, the evaluator 237 may evaluate performance such as latencies of access operations. The evaluator 237 may send the evaluation result to the model classifier 290 after including the evaluation result in the first device environment information DEI1. The evaluator 237 may include the evaluation result in the second device environment information DEI2 before the second device environment information DEI2 is transmitted to the model execution processor 240.
For example, the evaluator 237 may perform an evaluation and include the evaluation result in the device environment information DEI after the model is selected and a specific time elapses, or after the model is selected and a specific number of access operations are performed. For example, the evaluator 237 may perform an evaluation and provide the evaluation result after the model is selected and at least one of a read operation, a write operation, and an erase operation (or each of the read, write, and erase operations) is performed at least the specific number of times.
The model classifier 290 may further perform classification on the basis of the evaluation result. On the basis of the evaluation result, the model classifier 290 may maintain a current model or may request replacement of the current model with a new model. The model execution processor 240 may schedule the following tasks or policies based on the evaluation result.
For example, in the case where a current model is maintained, the model execution processor 240 may apply the evaluation result. In the case where the current model is changed to a new model, the evaluation result is not associated with the new model. Accordingly, in the case where the current model is changed to the new model, the model execution processor 240 does not apply the evaluation result.
For example, the model classifier 290 may apply the evaluation result after the model is selected and a specific time elapses, or after the model is selected and a specific number of access operations are performed. For example, the model classifier 290 may apply the evaluation result after the model is selected and at least one of a read operation, a write operation, and an erase operation (or each of the read, write, and erase operations) is performed at least the specific number of times.
As described above, the storage device 200b may select and execute a model without intervention of the host device 100b. Also, the storage device 200b may evaluate a model without intervention of the host device 100b and may replace or maintain a model depending on the evaluation result. Accordingly, the storage device 200b optimized to a use pattern of a user based on the machine learning is provided.
According to at least one embodiment of the inventive concept, a method in which a controller accesses a nonvolatile memory device is adjusted according to machine learning. Accordingly, a storage device that provides optimum operating performance to each user, a computing system including the storage device, and an operating method of the storage device are provided.
While the inventive concept has been described with reference to exemplary embodiments thereof, it will be apparent to those of ordinary skill in the art that various changes and modifications may be made thereto without departing from the spirit and scope of the inventive concept.