This application claims benefit of priority to Korean Patent Application No. 10-2022-0068788 filed on Jun. 7, 2022, in the Korean Intellectual Property Office, the disclosure of which is incorporated herein by reference in its entirety.
Example embodiments of the present disclosure relate to a storage device including a nonvolatile memory, and an electronic system including the storage device.
Flash memory devices may be widely used as audio and image data storage media for information devices such as a computer, a smartphone, a PDA, a digital camera, a camcorder, a voice recorder, an MP3 player, and a portable computer (handheld PC). A representative example of a flash memory-based mass storage device may be a solid state drive (SSD).
A storage device may support a multi-stream function of classifying and storing data according to a stream identifier (ID) provided together with a write request from a host.
An example embodiment of the present disclosure is to provide a storage device supporting a multi-stream function, in which different I/O performance and data recovery performance may be provided for each piece of data according to a stream ID allocated to the data.
An example embodiment of the present disclosure is to provide an electronic system in which a host may allocate a stream ID to the data depending on data attributes, and the storage device may apply different storing schemes depending on a stream ID allocated to the data, thereby providing different I/O performance and data recovery performance depending on the data attributes.
According to an example embodiment of the present disclosure, a storage device includes a memory device including a plurality of first nonvolatile memories and a plurality of second nonvolatile memories; and a controller configured to sort pieces of data having different attributes received from a host, and store the pieces of data that were sorted in different ones of a first memory array and a second memory array of the memory device, where the controller is configured to store data in the first memory array by a mirroring scheme using the plurality of first nonvolatile memories, and is configured to store data in the second memory array by a striping scheme using the plurality of second nonvolatile memories.
According to an example embodiment of the present disclosure, a storage device includes a memory device including a plurality of nonvolatile memories; and a controller configured to control the memory device, where the controller is configured to group nonvolatile memories having a same bit density among the plurality of nonvolatile memories into respective memory arrays, control the respective memory arrays to store data using different storing schemes, determine a target memory array among the respective memory arrays for storing pieces of data based on stream IDs of the pieces of data received along with a write request from a host, and store the pieces of data in the nonvolatile memories included in the target memory array using one of the different storing schemes applied to the target memory array.
According to an example embodiment of the present disclosure, an electronic system includes a host; and a storage device comprising a first memory array for storing data using a mirroring scheme, and a second memory array for storing data using a striping scheme, where the host is configured to allocate respective stream IDs to pieces of data having different attributes, map the respective stream IDs to a storage scheme comprising one of the mirroring scheme or the striping scheme, and provide mapping information between the respective stream IDs and the storage scheme to the storage device, and where the storage device is configured to determine a mapping relationship between the respective stream IDs and the first memory array or the second memory array based on the mapping information, sort the pieces of data received from the host based on the respective stream IDs allocated to the pieces of data, and store the pieces of data that were sorted in the first memory array or the second memory array.
The above and other aspects, features, and advantages of the present disclosure will be more clearly understood from the following detailed description, taken in combination with the accompanying drawings, in which:
Hereinafter, embodiments of the present disclosure will be described as follows with reference to the accompanying drawings.
The electronic system 10 may include a host 100 and a storage device 200. Also, the storage device 200 may include a controller 210 and a memory device 220.
The host 100 may include an electronic device, such as, for example, portable electronic devices such as a mobile phone, an MP3 player, or a laptop computer, or electronic devices such as a desktop computer, a game machine, a TV, a projector, and the like. The host 100 may include at least one operating system (OS). The operating system may generally manage and control functions and operations of the host 100.
The storage device 200 may include storage media for storing data in response to a request from the host 100. As an example, the storage device 200 may include a solid state drive (SSD), an embedded memory, and/or a removable external memory. When the storage device 200 is an SSD, the storage device 200 may be a device conforming to a nonvolatile memory express (NVMe) standard. When the storage device 200 is an embedded memory or an external memory, the storage device 200 may be a device conforming to a universal flash storage (UFS) or an embedded multi-media card (eMMC) standard. The host 100 and the storage device 200 may generate a packet according to an adopted standard protocol and may transmit the packet.
The memory device 220 may maintain stored data even when power is not supplied. The memory device 220 may store data provided from the host 100 through a program operation, and may output data stored in the memory device 220 through a read operation. The memory device 220 may include a plurality of nonvolatile memories NVM1, NVM2 (collectively, NVM). The nonvolatile memories NVM may include a plurality of memory blocks. The memory block may include a plurality of pages each including a plurality of memory cells. The memory cells may be programmed or read in units of pages, and may be erased in units of memory blocks.
When the memory device 220 includes a flash memory, the flash memory may include a 2D NAND memory or a 3D (or vertical) NAND (VNAND) memory. As another example, the storage device 200 may include various other types of nonvolatile memories. For example, the storage device 200 may include a magnetic RAM (MRAM), a spin-transfer torque MRAM (STT-MRAM), a conductive bridging RAM (CBRAM), a ferroelectric RAM (FeRAM), a phase-change RAM (PRAM), a resistive RAM (RRAM), and various other types of memory.
The controller 210 may control the memory device 220 in response to a request from the host 100. For example, the controller 210 may provide data read from the memory device 220 to the host 100, and may store the data provided from the host 100 in the memory device 220. For this operation, the controller 210 may control operations such as a read operation, a program operation, and an erase operation of the memory device 220.
The controller 210 may perform a foreground operation performed in response to a request from the host 100 and also a background operation for managing the memory device 220. For example, the memory device 220 may have attributes in which a unit of a program operation may be different from a unit of an erase operation and overwriting may not be supported. Due to these attributes, when data stored in the memory device 220 is updated, the amount of invalid data stored in the memory device 220 may increase, and valid data may be distributed. The controller 210 may collect valid data distributed in the memory device 220 and may perform a garbage collection operation to secure free space in the memory device 220.
Since the amount of processing resources of the controller 210 is limited, when the garbage collection operation is performed too frequently, an overhead in which the data input/output performance of the storage device 200 deteriorates may occur. When data having similar attributes is collected and stored together in the memory device 220, valid data may be less scattered even when data is updated, and the overhead due to garbage collection may be alleviated.
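As a simplified illustration of the garbage collection flow described above, the sketch below copies only the valid pages of a victim block into a free block and then erases the victim. The block model, page and block sizes, and the nvm_program_page()/nvm_erase_block() helpers are illustrative assumptions and are not taken from the disclosure.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PAGES_PER_BLOCK 64
#define PAGE_SIZE       4096

/* Hypothetical in-memory model of a memory block: page payloads,
 * per-page validity flags, and the next programmable page index. */
struct block {
    uint8_t page[PAGES_PER_BLOCK][PAGE_SIZE];
    bool    valid[PAGES_PER_BLOCK];
    size_t  next_free;
};

/* Stand-ins for the program and erase operations of the memory device. */
static void nvm_program_page(struct block *b, size_t p, const uint8_t *data)
{
    memcpy(b->page[p], data, PAGE_SIZE);
    b->valid[p] = true;
}

static void nvm_erase_block(struct block *b)
{
    memset(b, 0, sizeof(*b));          /* modeled erase of the whole block */
}

/* Garbage collection: copy only the valid pages of a victim block into a
 * free block, then erase the victim so its capacity can be reused.      */
static void garbage_collect(struct block *victim, struct block *free_blk)
{
    for (size_t p = 0; p < PAGES_PER_BLOCK; p++) {
        if (!victim->valid[p])
            continue;                                  /* skip invalid data */
        nvm_program_page(free_blk, free_blk->next_free++, victim->page[p]);
    }
    nvm_erase_block(victim);
}
```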
The storage device 200 may support a multi-stream function to alleviate overhead. The multi-stream function may refer to a function in which the storage device 200 may sort pieces of data based on attributes, and store pieces of data having different attributes in different memory regions. For example, the storage device 200 may sort the pieces of data received from the host 100 based on stream IDs (identifiers) allocated thereto, and store pieces of data having different stream IDs in different memory regions. When the host 100 provides a write request for data to the storage device 200 such that the storage device 200 may classify and store pieces of data according to attributes, the host 100 may also provide a stream ID according to attributes of the pieces of data.
Meanwhile, the host 100 may request the storage device 200 to exhibit different input/output performance and data recovery performance according to the attributes of pieces of data. For example, the host 100 may request to swiftly obtain system data such as log data and metadata from the storage device 200 (e.g., comparatively more quickly than user data). Also, when there is an error in the system data stored in the storage device 200, the host 100 may request that the system data be swiftly recovered. The host 100 may request the storage device 200 to provide a high storage capacity for user data. It may be difficult to meet such a request of the host 100 by simply storing the pieces of data in different spaces according to the attributes of the pieces of data.
According to an example embodiment, the storage device 200 may configure the plurality of memory arrays 221 and 222 by grouping the plurality of nonvolatile memories NVM. The storage device 200 may store the pieces of data in the plurality of memory arrays 221 and 222 based on a stream ID allocated to data received from the host 100. The storage device 200 may store the pieces of data in the plurality of memory arrays 221 and 222 using different storing schemes. The terms “first,” “second,” etc. may be used herein merely to distinguish one element from another.
For example, the storage device 200 may mirror data between the first nonvolatile memories NVM1 included in the first memory array 221 and may stripe data in the second nonvolatile memories NVM2 included in the second memory array 222. The first memory array 221 may input/output data more swiftly than the second memory array 222, and when an error occurs in the data, the first memory array 221 may swiftly restore the data. The second memory array 222 may store a large amount of data more efficiently than the first memory array 221.
According to an example embodiment, the host 100 may allocate different stream IDs to the pieces of data according to data attributes, and the storage device 200 may store the pieces of data to which different stream IDs are allocated in different schemes. For example, the host 100 may allocate different stream IDs to system data and user data. The storage device 200 may store system data in the first memory array 221 and may store user data in the second memory array 222 based on the stream ID.
System data may be mirrored in each of the first nonvolatile memories NVM1, and user data may be striped in the second nonvolatile memories NVM2. The storage device 200 may improve input/output performance and data recovery performance of system data, and may efficiently use a storage space of user data. In other words, the storage device may provide differentiated I/O performance and data recovery performance depending on data attributes.
Hereinafter, an example in which the storage device 200 may configure the plurality of memory arrays 221 and 222 will be described in greater detail with reference to
Referring to
The memory device 220 may include a plurality of nonvolatile memories. As described with reference to
Referring to
Each of the nonvolatile memories NVM11-NVM26 may be connected to a channel through a respective connection or way. For example, the first nonvolatile memories NVM11 and NVM12 may be connected to the first and second channels CH1 and CH2 through the ways W11-W24, and the second nonvolatile memories NVM21-NVM26 may be connected to the third to eighth channels CH3-CH8 through the ways W31-W84.
In an example embodiment, each of the nonvolatile memories NVM11-NVM26 may be implemented as an arbitrary memory unit which may operate in response to an individual command from the controller 210. For example, each of the nonvolatile memories NVM11 to NVM26 may be implemented as a chip or a die, but an example embodiment thereof is not limited thereto. Also, the number of channels included in the storage device 200 and the number of nonvolatile memories connected to each channel are not limited to any particular example.
The controller 210 may transmit signals to and receive signals from the memory device 220 through a plurality of channels CH1-CH8. For example, the controller 210 may transmit commands, addresses, and data to the memory device 220 or may receive data from the memory device 220 through the channels CH1-CH8.
The controller 210 may select one of the nonvolatile memories connected to the corresponding channel through each channel, and transmit signals to and receive signals from the selected nonvolatile memory. The controller 210 may transmit a command, an address, and data to the selected nonvolatile memory or may receive data from the selected nonvolatile memory through a channel.
The controller 210 may transmit signals to and receive signals from the memory device 220 in parallel through different channels. For example, the controller 210 may, while transmitting a command to the memory device 220 through the first channel CH1, transmit another command to the memory device 220 through the second channel CH2. Also, the controller 210 may, while receiving data from the memory device 220 through the first channel CH1, receive other data from the memory device 220 through the second channel CH2.
Each of the nonvolatile memories connected to the controller 210 through the same channel may perform an internal operation in parallel. For example, the controller 210 may transmit a command and an address in sequence to the nonvolatile memories connected to the first channel CH1. When the command and the address have been transmitted, each of these nonvolatile memories may perform an operation according to the command in parallel.
Hereinafter, nonvolatile memories will be described in more detail with reference to
The control logic circuit 320 may generally control various operations in the nonvolatile memory 300. The control logic circuit 320 may output various control signals in response to a command CMD and/or an address ADDR from the memory interface circuit 310. For example, the control logic circuit 320 may output a voltage control signal CTRL vol, a row address X-ADDR, and a column address Y-ADDR.
The memory cell array 330 may include a plurality of memory blocks BLK1-BLKz (z is a positive integer), and each of the plurality of memory blocks BLK1-BLKz may include a plurality of memory cells. The memory cell array 330 may be connected to the page buffer unit 340 through the bit lines BL, and may be connected to the row decoder 360 through the word lines WL, the string select lines SSL, and the ground select lines GSL.
In an example embodiment, the memory cell array 330 may include a 3D memory cell array, and the 3D memory cell array may include a plurality of NAND strings. Each NAND string may include memory cells connected to word lines stacked vertically on the substrate, respectively. U.S. Pat. Nos. 7,679,133, 8,553,466, 8,654,587, 8,559,235, and U.S. Patent Application Publication No. 2011/0233648 are incorporated herein by reference in their entirety. In an example embodiment, the memory cell array 330 may include a 2D memory cell array, and the 2D memory cell array may include a plurality of NAND strings disposed in row and column directions.
The page buffer 340 may include a plurality of page buffers PB1 to PBn (where n is an integer equal to or greater than 3), and the plurality of page buffers PB1 to PBn may be connected to the memory cells, respectively, through a plurality of bit lines BL. The page buffer 340 may select at least one bit line among the bit lines BL in response to the column address Y-ADDR. The page buffer 340 may operate as a write driver or a sense amplifier depending on an operation mode. For example, during a program operation, the page buffer 340 may apply a bit line voltage corresponding to data to be programmed to a selected bit line. During a read operation, the page buffer 340 may sense data stored in the memory cell by sensing a current or voltage of the selected bit line.
The voltage generator 350 may generate various types of voltages for performing program, read, and erase operations based on the voltage control signal CTRL vol. For example, the voltage generator 350 may generate a program voltage, a read voltage, a program verify voltage, an erase voltage, or the like, as the word line voltage VWL.
The row decoder 360 may, in response to the row address X-ADDR, select one of the plurality of word lines WL and may select one of the plurality of string selection lines SSL. For example, during a program operation, the row decoder 360 may apply a program voltage and a program verify voltage to a selected word line, and during a read operation, the row decoder 360 may apply a read voltage to the selected word line.
The memory block BLKi illustrated in
Referring to
The string select transistor SST may be connected to the corresponding string select lines SSL1, SSL2, and SSL3. The plurality of memory cells MC1, MC2, . . . , MC8 may be connected to corresponding gate lines GTL1, GTL2, . . . , GTL8, respectively. The gate lines GTL1, GTL2, . . . , GTL8 may be word lines, and a portion or subset of the gate lines GTL1, GTL2, . . . , GTL8 may be dummy word lines. The ground select transistor GST may be connected to the corresponding ground select lines GSL1, GSL2, or GSL3. The string select transistor SST may be connected to the corresponding bit lines BL1, BL2, or BL3, and the ground select transistor GST may be connected to the common source line CSL.
The word lines (e.g., WL1) on the same level may be connected in common, and the ground selection lines GSL1, GSL2, and GSL3 and the string selection lines SSL1, SSL2, and SSL3 may be isolated from each other.
In example embodiments, the nonvolatile memories NVM1 and NVM2 may have various bit densities. A bit density of a nonvolatile memory may refer to the number of data bits which each of the memory cells included in the nonvolatile memory may store.
Referring to
When the memory cell is a single level cell (SLC) storing 1-bit data, the memory cell may have a threshold voltage corresponding to one of the first program state P1 and the second program state P2. The read voltage Va1 may be a voltage for distinguishing the first program state P1 from the second program state P2. Since the memory cell having the first program state P1 has a threshold voltage lower than the read voltage Va1, the memory cell may be read as an on-cell. Since the memory cell having the second program state P2 has a threshold voltage higher than the read voltage Va1, the memory cell may be read as an off-cell.
When the memory cell is a multiple level cell (MLC) storing 2-bit data, the memory cell may have a threshold voltage corresponding to one of the first to fourth program states P1 to P4. The first to third read voltages Vb1 to Vb3 may be read voltages for distinguishing the first to fourth program states P1 to P4 from each other.
When the memory cell is a triple level cell (TLC) storing 3-bit data, the memory cell may have a threshold voltage corresponding to one of the first to eighth program states P1 to P8. The first to seventh read voltages Vc1 to Vc7 may be read voltages for distinguishing the first to eighth program states P1 to P8 from each other.
When the memory cell is a quadruple level cell (QLC) storing 4-bit data, the memory cell may have one of first to sixteenth program states P1 to P16. The first to fifteenth read voltages Vd1 to Vd15 may be read voltages for distinguishing the first to sixteenth program states P1 to P16 from each other.
Among SLC, MLC, TLC, and QLC, SLC may have the lowest bit density and QLC may have the highest bit density. In general, a memory cell storing n-bit data may have 2^n program states distinguished by 2^n - 1 read voltages. A memory cell having a higher bit density may store a larger amount of data, but the number of program states which may be formed in the corresponding memory cell and the number of read voltages for distinguishing the program states may increase. Accordingly, each program state may need to be precisely programmed in a memory cell having a high bit density, and when the threshold voltage distribution deteriorates, it may be highly likely that data is read erroneously. That is, the higher the bit density of the memory cell, the lower the reliability of the data stored in the memory cell may be.
According to an example embodiment, the first nonvolatile memories NVM1 may have a bit density relatively lower than that of the second nonvolatile memories NVM2. The storage device 200 may increase data stability by mirroring data to the first nonvolatile memories NVM1 including memory cells having relatively high reliability. Also, the storage device 200 may more efficiently use the storage space by striping data to the second nonvolatile memories NVM2 including memory cells having a relatively high bit density.
Referring to
As described with reference to
According to an example embodiment, the first nonvolatile memories NVM1 may include memory cells having a relatively low bit density, and the second nonvolatile memories NVM2 may include memory cells having a relatively high bit density. For example, the first nonvolatile memories NVM1 may be implemented as SLCs, and the second nonvolatile memories NVM2 may be implemented as MLCs, TLCs, or QLCs.
The controller 210 may include a host interface 211, a memory interface 212, and a central processing unit (CPU) 213. Also, the controller 210 may further include a packet manager 216, a buffer memory 217, an error correction code (ECC) engine 218, and an advanced encryption standard (AES) engine 219. The CPU 213 may drive the data allocator 214 and a flash translation layer (FTL) 215, and the controller 210 may further include a working memory (not illustrated) into which the data allocator 214 and the FTL 215 are loaded.
The host interface 211 may transmit and receive packets to and from the host 100. A packet transmitted from the host 100 to the host interface 211 may include a command or data to be written to the memory device 220, and a packet transmitted from the host interface 211 to the host 100 may include a response to a command or data read from the memory device 220.
The memory interface 212 may transmit data to be written to the memory device 220 to the memory device 220 or may receive data read from the memory device 220. The memory interface 212 may be implemented to comply with a standard protocol such as a toggle or an open NAND flash interface (ONFI).
The data allocator 214 may divide or sort the pieces of data received from the host 100 based on the stream IDs allocated to the pieces of data, and may store the divided or sorted pieces of data in the first memory array 221 or the second memory array 222.
The data allocator 214 may control the first nonvolatile memories NVM1 included in the first memory array 221 to mirror data. For example, data stored in the first memory array 221 may be copied to the first nonvolatile memories NVM1 included in the first memory array 221. Since the bit density of memory cells of the first nonvolatile memories NVM1 is relatively low, an error may infrequently occur in the data copy of the first nonvolatile memories NVM1. Also, even when an error occurs in one of the two or more data copies stored in the first memory array 221, data may be recovered using the other copy.
Also, the data allocator 214 may control the second nonvolatile memories NVM2 included in the second memory array 222 to stripe data. For example, data allocated to the second memory array 222 may be striped through two or more of the second nonvolatile memories NVM2. The striping of data may indicate that logically contiguous data chunks may be stored across a plurality of nonvolatile memories by a round-robin method, for example, with each chunk in a respective nonvolatile memory.
Meanwhile, the data allocator 214 may generate a parity chunk by performing a parity operation on a predetermined number of data chunks, and the parity chunk may also be striped together with the data chunks. When there is an error in one of the data chunks, the storage device 200 may restore the data chunk using the other data chunks and the parity chunk striped together with the data chunk. The storage device 200 may efficiently store a large amount of data in the storage space of the second nonvolatile memories NVM2 and may enable data recovery.
The FTL 215 may perform various functions such as address mapping, wear-leveling, and garbage collection. The address mapping operation may be an operation of changing a logical address received from the host 100 into a physical address used to actually store data in the memory device 220. The wear-leveling may be a technique for preventing excessive degradation of a specific block by ensuring that blocks in the memory device 220 are used uniformly, and may be implemented through a firmware technique for balancing erase counts of physical blocks, for example. The garbage collection may be a technique for securing usable capacity in the memory device 220 by copying valid data of a block to a new block and erasing an existing block.
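The address mapping role of the FTL can be pictured with the minimal page-mapping sketch below. The table layout, the INVALID_PPN marker, and the function names are illustrative assumptions rather than the actual implementation of the FTL 215.

```c
#include <stdint.h>

#define NUM_LOGICAL_PAGES 1024u
#define INVALID_PPN       0xFFFFFFFFu

/* Hypothetical page-level mapping table: logical page number (derived
 * from the host's logical address) -> physical page number in the
 * memory device 220.                                                  */
static uint32_t l2p[NUM_LOGICAL_PAGES];

static void ftl_init(void)
{
    for (uint32_t lpn = 0; lpn < NUM_LOGICAL_PAGES; lpn++)
        l2p[lpn] = INVALID_PPN;            /* nothing mapped yet */
}

/* On a write, the FTL records a new physical location for the logical
 * page; because overwriting is not supported, the previous physical
 * page simply becomes invalid data to be reclaimed later by garbage
 * collection.                                                         */
static void ftl_map_write(uint32_t lpn, uint32_t new_ppn)
{
    l2p[lpn] = new_ppn;
}

/* On a read, the FTL translates the logical page number to the
 * physical page number where the data currently resides.             */
static uint32_t ftl_translate(uint32_t lpn)
{
    return l2p[lpn];                       /* INVALID_PPN if never written */
}
```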
The packet manager 216 may generate a packet according to the protocol of the interface negotiated with the host 100 or may parse various information from the packet received from the host 100.
The buffer memory 217 may temporarily store data to be written to the memory device 220 or data to be read from the memory device 220. The buffer memory 217 may be an element provided in the controller 210, or may be disposed externally of the controller 210.
The ECC engine 218 may perform an error detection and correction function for data read from the memory device 220 (also referred to as read data). More specifically, the ECC engine 218 may generate parity bits for write data to be written to the memory device 220, and the generated parity bits may be stored in the memory device 220 together with the write data. When reading data from the memory device 220, the ECC engine 218 may correct an error in the read data using parity bits read from the memory device 220 together with the read data, and may output the error-corrected read data.
The AES engine 219 may perform at least one of an encryption operation and a decryption operation for data input to the controller 210 using a symmetric-key algorithm.
Hereinafter, techniques for the storage device 200 to store data in a memory array will be described in greater detail with reference to
Referring to
The controller 210 may control data to be mirrored between the nonvolatile memories NVM11 and NVM12 included in the first memory array 221. For example, when the controller 210 receives pieces of data from the host 100, the controller 210 may store the pieces of data in the nonvolatile memories NVM11 and NVM12, respectively.
For example, the controller 210 may divide the pieces of data received from the host 100 into data units DATAa-DATAf of a predetermined size, and may add a cyclic redundancy check (CRC) value CRCV to each data unit, thereby generating data chunks DCHUNK. The CRC value CRCV may be generated by the ECC engine 218 of the controller 210 described with reference to
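A minimal sketch of this mirroring write path follows, under a few assumptions: the data unit size, the generic bit-wise CRC-32 routine (standing in for whatever CRC the ECC engine 218 actually computes), and the mirror_copy[]/nvm_chunk_write() model are all illustrative and are not taken from the disclosure.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define DATA_UNIT_SIZE 512u       /* assumed size of the data units DATAa-DATAf */

struct data_chunk {               /* data unit plus its CRC value CRCV          */
    uint8_t  data[DATA_UNIT_SIZE];
    uint32_t crcv;
};

/* Generic bit-wise CRC-32 (polynomial 0xEDB88320), used here only as a
 * placeholder for the CRC computed by the controller.                  */
static uint32_t crc32(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int b = 0; b < 8; b++)
            crc = (crc >> 1) ^ (0xEDB88320u & (uint32_t)-(int32_t)(crc & 1u));
    }
    return ~crc;
}

/* Simple model of the two mirrored nonvolatile memories NVM11 and NVM12. */
static struct data_chunk mirror_copy[2];

static void nvm_chunk_write(int nvm, const struct data_chunk *chunk)
{
    mirror_copy[nvm] = *chunk;
}

/* Mirroring: build one data chunk from a data unit and program the same
 * chunk to both first nonvolatile memories.                             */
static void mirror_write(const uint8_t unit[DATA_UNIT_SIZE])
{
    struct data_chunk chunk;

    memcpy(chunk.data, unit, DATA_UNIT_SIZE);
    chunk.crcv = crc32(chunk.data, DATA_UNIT_SIZE);

    nvm_chunk_write(0, &chunk);   /* copy for NVM11 */
    nvm_chunk_write(1, &chunk);   /* copy for NVM12 */
}
```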
Referring to
The controller 210 may check whether there is an error in the data chunk by performing a CRC operation on one of the data chunks. In the example in
In example embodiments, the nonvolatile memories NVM11 and NVM12 may include SLCs. Accordingly, an error may infrequently occur in data chunks stored in the nonvolatile memories NVM11 and NVM12.
The nonvolatile memories NVM11 and NVM12 may perform a read operation independently of each other, may be connected to different channels CH1 and CH2 and may communicate with the controller 210 in parallel. The controller 210 may obtain the first data chunk DCHUNK1 from the nonvolatile memory NVM11 and may simultaneously obtain the second data chunk DCHUNK2 from the nonvolatile memory NVM12. Accordingly, when there is an error in the first data chunk DCHUNK1, the controller 210 may provide and restore data using the previously obtained second data chunk DCHUNK2.
That is, an error may rarely occur in the data chunks stored in the first memory array 221, and even when an error occurs in some data chunks, the storage device 200 may swiftly perform data recovery.
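Continuing the hypothetical sketch above (and reusing its struct data_chunk, crc32(), mirror_copy[], and nvm_chunk_write() definitions), a read from the first memory array could verify the CRC of one copy and fall back to the mirrored copy, restoring the corrupted copy afterwards. The function names and return convention are assumptions for illustration.

```c
/* Model of reading one copy back from NVM11 (index 0) or NVM12 (index 1). */
static void nvm_chunk_read(int nvm, struct data_chunk *out)
{
    *out = mirror_copy[nvm];
}

/* Mirrored read: check the CRC of the first copy and, on a mismatch,
 * serve the request from the second copy and repair the first one.   */
static int mirror_read(uint8_t out[DATA_UNIT_SIZE])
{
    struct data_chunk c1, c2;

    nvm_chunk_read(0, &c1);                 /* DCHUNK1 from NVM11 */
    nvm_chunk_read(1, &c2);                 /* DCHUNK2 from NVM12 */

    if (crc32(c1.data, DATA_UNIT_SIZE) == c1.crcv) {
        memcpy(out, c1.data, DATA_UNIT_SIZE);
        return 0;
    }
    if (crc32(c2.data, DATA_UNIT_SIZE) == c2.crcv) {
        memcpy(out, c2.data, DATA_UNIT_SIZE);
        nvm_chunk_write(0, &c2);            /* restore the corrupted copy */
        return 0;
    }
    return -1;                              /* both copies failed the CRC */
}
```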
Referring to
The controller 210 may control data to be striped between the nonvolatile memories NVM21-NVM26 included in the second memory array 222.
For example, the controller 210 may generate a parity chunk using a predetermined number of data chunks, and may store the data chunks and the parity chunk across the nonvolatile memories NVM21-NVM26, for example, with each data chunk DCHUNK or parity chunk PCHUNK stored in a respective one of the nonvolatile memories NVM21-NVM26. A unit of data which may be stored across the nonvolatile memories NVM21 to NVM26 may be referred to as a stripe.
The parity chunk PCHUNK may be generated based on a parity operation using a predetermined number of data chunks DCHUNK1-DCHUNK5. For example, the ECC engine 218 described with reference to
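The stripe construction can be sketched as below, assuming a byte-wise XOR parity (the exact parity operation is not fixed here), an arbitrary chunk size, and a simple array model of the six second nonvolatile memories NVM21-NVM26; these are illustrative assumptions only.

```c
#include <stdint.h>
#include <string.h>

#define CHUNK_SIZE   512u                   /* assumed chunk size            */
#define DATA_CHUNKS  5u                     /* DCHUNK1..DCHUNK5 per stripe   */
#define STRIPE_WIDTH (DATA_CHUNKS + 1u)     /* plus one parity chunk PCHUNK  */

/* Model of one stripe laid across NVM21..NVM26: one chunk per memory. */
static uint8_t nvm2[STRIPE_WIDTH][CHUNK_SIZE];

static void nvm2_chunk_write(unsigned idx, const uint8_t chunk[CHUNK_SIZE])
{
    memcpy(nvm2[idx], chunk, CHUNK_SIZE);
}

/* Striping with parity: XOR the five data chunks into the parity chunk,
 * then place each chunk in a respective nonvolatile memory.             */
static void stripe_write(const uint8_t data[DATA_CHUNKS][CHUNK_SIZE])
{
    uint8_t parity[CHUNK_SIZE] = {0};

    for (unsigned c = 0; c < DATA_CHUNKS; c++) {
        for (unsigned i = 0; i < CHUNK_SIZE; i++)
            parity[i] ^= data[c][i];        /* accumulate the XOR parity */
        nvm2_chunk_write(c, data[c]);       /* DCHUNK(c+1) -> NVM2(c+1)  */
    }
    nvm2_chunk_write(DATA_CHUNKS, parity);  /* PCHUNK -> NVM26           */
}
```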
Meanwhile, when the controller 210 stores data using the striping scheme, a parity operation may be required, and accordingly, data input/output performance may be slightly lower than that of storing data using the mirroring scheme. However, when the controller 210 stores data using the striping scheme, a larger amount of data chunks may be stored in a storage space having the same capacity as compared to when the data is stored using the mirroring scheme.
Referring to
The controller 210 may check whether there is an error in the third data chunk DCHUNK3 by performing a CRC operation on the third data chunk DCHUNK3 including the data DATAa3. When there is an error in the third data chunk DCHUNK3, the controller 210 may recover the third data chunk DCHUNK3 by performing a parity operation using the other data chunks and the parity chunk PCHUNK in the loaded stripe. For example, when the parity chunk PCHUNK is generated by performing an XOR operation on the data chunks DCHUNK1-DCHUNK5, the data chunk may be recovered by performing an XOR operation on the other data chunks and the parity chunk PCHUNK.
The controller 210 may provide data to the host 100 using the recovered third data chunk DCHUNK3, and may restore the second memory array 222 storing the data including an error by storing the stripe including the recovered third data chunk DCHUNK3 across the nonvolatile memories NVM21-NVM26.
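Continuing the same hypothetical stripe model (and reusing its nvm2[] array, constants, and nvm2_chunk_write() helper), a chunk that fails its check can be rebuilt by XORing the remaining data chunks with the parity chunk of the stripe, and the rebuilt chunk can then be written back to restore the stripe.

```c
/* Recover the chunk held by the failed nonvolatile memory 'bad_idx' by
 * XORing the other data chunks and the parity chunk of the same stripe,
 * then write the rebuilt chunk back to restore the stripe.              */
static void stripe_recover(unsigned bad_idx, uint8_t out[CHUNK_SIZE])
{
    memset(out, 0, CHUNK_SIZE);

    for (unsigned c = 0; c < STRIPE_WIDTH; c++) {
        if (c == bad_idx)
            continue;                       /* skip the failed chunk */
        for (unsigned i = 0; i < CHUNK_SIZE; i++)
            out[i] ^= nvm2[c][i];
    }
    nvm2_chunk_write(bad_idx, out);
}
```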
The nonvolatile memories NVM21-NVM26 may perform a read operation independently of each other, may be connected to different channels CH3-CH8, and may communicate with the controller 210 in parallel. The controller 210 may obtain the entire stripe in parallel from the nonvolatile memories NVM21 to NVM26 even when the data requested to be read by the host 100 is included in only a portion of the data chunks included in the stripe. When there is an error in some data chunks, the controller 210 may swiftly respond to the read request of the host 100 by recovering data using the other data chunks and the parity chunk obtained in advance.
In example embodiments, the nonvolatile memories NVM21-NVM26 may include memory cells such as MLC, TLC, and QLC. Accordingly, when an error occurs in a portion of data chunks stored in the second memory array 222, the storage device 200 may recover the data chunks using the parity chunks, and may efficiently use the storage space of the second memory array 222.
Meanwhile, an example embodiment in which five data chunks and one parity chunk are striped in six nonvolatile memories has been described with reference to
As described with reference to
According to an example embodiment, the storage device 200 may store system data in the first memory array 221 such that high stability and high input/output performance of the system data may be ensured, and when an error occurs in the system data, the system data may be swiftly recovered. Also, the storage device 200 may efficiently store a large amount of user data in the second memory array 222.
Hereinafter, a method of providing differentiated performance depending on data attributes by dividing or sorting pieces of data according to the stream IDs allocated to the pieces of data and storing the divided or sorted data in a plurality of memory arrays will be described in greater detail with reference to
Referring to
The controller 210 may store a mapping table indicating a mapping relationship between memory arrays and stream IDs. When receiving the stream ID allocated to the data along with a write request for data from the host 100, the controller 210 may determine a target memory array for storing the data by referring to the mapping table. The controller 210 may store the data using a storing scheme applied to the target memory array. For example, the controller 210 may store data to be stored in the first memory array 221 using a mirroring scheme, and may store data to be stored in the second memory array 222 using a striping scheme.
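A minimal sketch of this mapping-table lookup and dispatch is shown below; the stream-ID values, the array identifiers, and the write-path stubs are illustrative assumptions rather than the actual controller firmware.

```c
#include <stddef.h>
#include <stdint.h>

enum target_array { FIRST_ARRAY_MIRRORING, SECOND_ARRAY_STRIPING };

/* Hypothetical mapping table: stream ID -> target memory array. The
 * concrete stream-ID assignment is only an example.                 */
static enum target_array lookup_target(uint16_t stream_id)
{
    return (stream_id == 1u) ? FIRST_ARRAY_MIRRORING   /* e.g., system data */
                             : SECOND_ARRAY_STRIPING;  /* e.g., user data   */
}

/* Stubs standing in for the mirroring and striping write paths. */
static void array1_mirror_write(const uint8_t *data, size_t len) { (void)data; (void)len; }
static void array2_stripe_write(const uint8_t *data, size_t len) { (void)data; (void)len; }

/* Dispatch a write request to the target memory array according to the
 * stream ID received along with the request from the host.            */
static void handle_write(uint16_t stream_id, const uint8_t *data, size_t len)
{
    if (lookup_target(stream_id) == FIRST_ARRAY_MIRRORING)
        array1_mirror_write(data, len);
    else
        array2_stripe_write(data, len);
}
```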
According to an example embodiment, a relationship between a stream ID and a storing scheme may be shared between the host 100 and the storage device 200. The host 100 may allocate a stream ID to the pieces of data depending on the data attribute, and may provide the stream ID together with a write request for the pieces of data to the storage device 200, such that the host 100 may request the storage device 200 to apply different storing schemes to the pieces of data depending on the data attributes.
The host 100 may allocate different stream IDs to pieces of data having different attributes. In the example in
Referring to
Hereinafter, operations of the host 100 and the storage device 200 which may allow the storage device 200 to apply different storing schemes depending on data attributes will be described with reference to
Referring to
In operation S14, the host 100 may provide a stream ID together when providing a write request. For example, the host 100 may provide a write request, data to be written, a logical address LBA of the data, and a stream ID determined depending on attributes of the data.
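On the host side, operation S14 can be pictured with the generic structures below; the field names, the stream-ID values, and the attribute classification are assumptions for illustration and are not tied to a particular command set.

```c
#include <stdint.h>

/* Hypothetical host-side write request carrying a stream ID. */
struct write_request {
    uint64_t lba;          /* logical address (LBA) of the data      */
    uint32_t num_blocks;   /* length of the data to be written       */
    uint16_t stream_id;    /* allocated according to data attributes */
};

enum data_attr { SYSTEM_DATA, USER_DATA };

/* Example attribute-based allocation: system data (e.g., log data and
 * metadata) gets the stream ID mapped to the mirroring array, and user
 * data gets the one mapped to the striping array.                      */
static uint16_t allocate_stream_id(enum data_attr attr)
{
    return (attr == SYSTEM_DATA) ? 1u : 2u;
}

static struct write_request make_write_request(enum data_attr attr,
                                               uint64_t lba,
                                               uint32_t num_blocks)
{
    struct write_request req = { lba, num_blocks, allocate_stream_id(attr) };
    return req;
}
```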
In operation S15, the storage device 200 may store data in the target memory array corresponding to the stream ID by referring to the stream ID and the mapping information received along with the write request from the host 100.
Referring to
In operation S22, the storage device 200 may provide mapping information between the stream ID and the storing scheme to the host 100. In operation S23, the host 100 may allocate different stream IDs depending on data types with reference to the provided mapping information.
In operation S24, the host 100 may provide a stream ID together when providing a write request to the storage device 200. Operation S24 may be performed in substantially the same manner as operation S14 described with reference to
According to an example embodiment, the storage device 200 may divide or sort pieces of data having different stream IDs and may store the divided or sorted pieces of data using different storing schemes. The storage device 200 may, by allowing the host 100 to allocate the stream IDs to the pieces of data depending on data attributes, provide differentiated data input/output performance, data stability, data recovery performance, and the like, depending on data attributes.
Referring to
The main processor 1100 may control overall operations of the system 1000, more specifically, operations of the other elements included in the system 1000. The main processor 1100 may be implemented as a general-purpose processor, a dedicated processor, or an application processor.
The main processor 1100 may include one or more CPU cores 1110, and may further include a controller 1120 for controlling the memories 1200a, 1200b and/or the storage devices 1300a and 1300b. In example embodiments, the main processor 1100 may further include an accelerator 1130 which may be a dedicated circuit for high-speed data operation such as artificial intelligence (AI) data operation. The accelerator 1130 may include a graphics processing unit (GPU), a neural processing unit (NPU), and/or a data processing unit (DPU), and may be implemented as a chip physically independent from the other elements of the main processor 1100.
The memories 1200a and 1200b may be used as a main memory device of the system 1000, and may include a volatile memory such as SRAM and/or DRAM, or may include a nonvolatile memory such as a flash memory, PRAM and/or RRAM. The memories 1200a and 1200b may be implemented in the same package as the main processor 1100.
The storage devices 1300a and 1300b may function as nonvolatile storage devices which may store data regardless of whether power is supplied or not, and may have a relatively large storage capacity as compared to the memories 1200a and 1200b. The storage devices 1300a and 1300b may include storage controllers 1310a and 1310b and nonvolatile memory NVM 1320a and 1320b for storing data under control of the storage controllers 1310a and 1310b. The nonvolatile memories 1320a and 1320b may include a flash memory having a two-dimensional (2D) structure or a three-dimensional (3D) Vertical NAND (V-NAND) structure, or may include different types of nonvolatile memories such as PRAM and/or RRAM.
The storage devices 1300a and 1300b may be included in the system 1000 in a state of being physically isolated from the main processor 1100, or may be implemented in the same package as the main processor 1100. Also, the storage devices 1300a and 1300b may have the same form as, or may otherwise be implemented as, a solid state drive (SSD) or a memory card, such that the storage devices 1300a and 1300b may be coupled to the other elements of the system 1000 through an interface such as a connecting interface 1480 to be described later. Such storage devices 1300a and 1300b may be devices to which standard protocols such as universal flash storage (UFS), embedded multi-media card (eMMC), or nonvolatile memory express (NVMe) are applied, but an example embodiment thereof is not limited thereto.
The image capturing device 1410 may obtain a still image or a video, and may be implemented as a camera, a camcorder, and/or a webcam.
The user input device 1420 may receive various types of pieces of data input from a user of the system 1000, and may be implemented as a touch pad, a keypad, a keyboard, a mouse and/or a microphone.
The sensor 1430 may sense various types of physical quantities which may be obtained from an external entity of the system 1000, and may convert the sensed physical quantities into electrical signals. Such a sensor 1430 may be implemented as a temperature sensor, a pressure sensor, an illuminance sensor, a position sensor, an acceleration sensor, a biosensor and/or a gyroscope sensor.
The communication device 1440 may transmit signals to and receive signals from other devices outside the system 1000 in accordance with various communication protocols. Such a communication device 1440 may be implemented as an antenna, a transceiver, and/or a modem.
The display 1450 and the speaker 1460 may function as output devices for outputting visual information and auditory information to the user of the system 1000, respectively.
The power supplying device 1470 may appropriately convert power supplied from a battery (not illustrated) embedded in the system 1000 and/or an external power source, and may supply the power to each element of the system 1000.
The connecting interface 1480 may provide a connection between the system 1000 and an external device connected to the system 1000 and exchanging data with the system 1000. The connecting interface 1480 may be implemented as various interface methods such as, for example, advanced technology attachment (ATA), serial ATA (SATA), external SATA (e-SATA), small computer system interface (SCSI), serial attached SCSI (SAS), peripheral component interconnect (PCI), PCI express (PCIe), NVM express (NVMe), IEEE 1394, universal serial bus (USB), secure digital (SD) card, multi-media card (MMC), embedded multi-media card (eMMC), universal flash storage (UFS), embedded universal flash storage (eUFS), compact flash (CF) card interface, and the like.
Data generated by the system 1000 may have various attributes. According to an example embodiment, each of the storage devices 1300a and 1300b may classify and store pieces of data depending on the attributes thereof. The main processor 1100 may allocate stream IDs, according to the attributes thereof, to the pieces of data to be stored in the storage devices 1300a and 1300b, and may provide the stream IDs together with a write request for the pieces of data. The storage devices 1300a and 1300b may divide or sort the pieces of data according to attributes by referring to the stream IDs received along with the write request, and may store the divided or sorted pieces of data in the plurality of memory arrays using different storing schemes. For example, when system data is stored in one or more of the storage devices 1300a and 1300b using a mirroring scheme, stability and recovery performance of the system data may improve, and when user data is stored in one or more of the storage devices 1300a and 1300b using a striping scheme, the storage space of the storage devices 1300a and 1300b may be used efficiently.
According to the aforementioned example embodiments, the storage device may store the pieces of data in different memory arrays depending on stream IDs allocated to the pieces of data, and may control the different memory arrays to store the pieces of data in different schemes, such that input/output performance and data recovery performance may be differentiated depending on the stream IDs allocated to the pieces of data.
Also, in an electronic system, a host may allocate different stream IDs to pieces of data having different attributes, and the storage device may provide different input/output performance and data recovery performance depending on data attributes.
While the example embodiments have been illustrated and described above, it will be apparent to those skilled in the art that modifications and variations could be made without departing from the scope of the present disclosure as defined by the appended claims.