Computing systems such as servers, personal computers, tablets, and cellular telephones often utilize a host system that communicates with one or more nonvolatile storage systems. An important feature by which storage systems are often judged is the speed at which a host system is able to write data to, and read data from, the storage system. Improved storage systems that allow a host system to write data to, and read data from, the storage system at increased speeds are therefore desirable.
The present disclosure is directed to systems and methods for managing parallel access to multiple storage systems. In one aspect, a method is disclosed for managing parallel access to multiple storage systems. The method is performed in a host operatively coupled to at least a first memory system and a second memory system. A controller separates data of a file into a plurality of data chunks. The controller stores a first copy of the plurality of data chunks in the first memory system and stores a second copy of the plurality of data chunks in the second memory system. The controller reads a data chunk of the plurality of data chunks of the file from the first memory system or the second memory system based on a determination of whether the first memory system or the second memory system is able to provide the data chunk to the host system more quickly. The controller may then assemble the data of the file based on the data chunk.
In another aspect, a host system including an interface and a processor is disclosed. The interface is operatively coupled with at least a first memory system and a second memory system. The processor is in communication with the first memory system and the second memory system via the interface. The processor is configured to separate data of a file into a plurality of data chunks. The processor is further configured to store a first copy of the plurality of data chunks in the first memory system and store a second copy of the plurality of data chunks in the second memory system. The processor is further configured to read a data chunk of the plurality of data chunks of the file from the first memory system or the second memory system based on a determination of whether the first memory system or the second memory system is able to provide the data chunk to the host system more quickly. The processor may then assemble the data of the file based on the data chunk.
The present disclosure is directed to systems and methods for managing parallel access to multiple storage systems. As explained in more detail below, a host system utilizes at least two memory systems to perform parallel processing. During operation, the host system separates a file into a plurality of data chunks and stores a copy of the plurality of data chunks in each of the memory systems. When the host reads the file from the memory systems, the host system simultaneously reads data chunks of the file from the memory systems to increase performance speed.
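The overview above can be illustrated with a minimal sketch, not the claimed implementation, in which the two memory systems are modeled simply as directories on the host and a file is split into fixed-size chunks whose copies are written to both. The chunk size, directory arguments, and chunk file naming are illustrative assumptions only.

```python
# Minimal sketch (not the claimed implementation): split a file into
# fixed-size chunks and write one copy of every chunk to each of two
# memory systems, here modeled as ordinary directories.
import os

CHUNK_SIZE = 512 * 1024  # hypothetical 512 KiB chunk size


def split_into_chunks(path, chunk_size=CHUNK_SIZE):
    """Yield (index, bytes) pairs for the file at `path`."""
    with open(path, "rb") as f:
        index = 0
        while True:
            data = f.read(chunk_size)
            if not data:
                break
            yield index, data
            index += 1


def store_copies(path, memory_a, memory_b):
    """Write an identical copy of every chunk to both memory systems."""
    for index, data in split_into_chunks(path):
        for memory in (memory_a, memory_b):
            os.makedirs(memory, exist_ok=True)
            with open(os.path.join(memory, f"chunk_{index:06d}"), "wb") as out:
                out.write(data)
```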
In addition to increased performance speed, the disclosed methods and systems provide host systems the ability to perform separate operations with respect to each memory system. For example, a host may read data from one memory system for use in playing a video or music, while simultaneously downloading and storing data to another memory system. Further, the disclosed systems and methods provide advantages in that a controller on the host system may implement the disclosed parallel processing without the addition of hardware. While the disclosed systems and methods for managing parallel access to multiple storage systems may be used with many devices and memory storage systems, it should be appreciated that the disclosed systems and methods are especially advantageous for mobile devices such as cellular phones with an embedded flash memory and a removable memory card.
A memory system suitable for use in implementing aspects of the invention is shown in
The host system 100 of
Either of the memory systems 102a, 102b of
The system controller 118 may be implemented on a single integrated circuit chip, such as an application specific integrated circuit (ASIC), as shown in
Each die 120 in the flash memory 116 may contain an array of memory cells organized into multiple planes. One of
Although the processor 206 in the system controller 118 controls the operation of the memory chips in each bank 120 to program data, read data, erase and attend to various housekeeping matters, each memory chip also contains some controlling circuitry that executes commands from the controller 118 to perform such functions. Interface circuits 342 are connected to the control and status portion 308 of the system bus 302. Commands from the controller 118 are provided to a state machine 344 that then provides specific control of other circuits in order to execute these commands. Control lines 346-354 connect the state machine 344 with these other circuits as shown in
A NAND architecture of the memory cell arrays 310 and 312 is discussed below, although other architectures, such as NOR, can be used instead. An example NAND array is illustrated by the circuit diagram of
Word lines 438-444 of
A second block 454 is similar, its strings of memory cells being connected to the same global bit lines as the strings in the first block 452 but having a different set of word and control gate lines. The word and control gate lines are driven to their proper operating voltages by the row control circuits 324. If there is more than one plane in the system, such as planes 1 and 2 of
The memory cells may be operated to store two levels of charge so that a single bit of data is stored in each cell. This is typically referred to as binary or single level cell (SLC) memory. Alternatively, the memory cells may be operated to store more than two detectable levels of charge in each charge storage element or region, thereby storing more than one bit of data in each cell. This latter configuration is referred to as multi-level cell (MLC) memory. Both types of memory cells may be used in a memory; for example, binary flash memory may be used for caching data and MLC memory may be used for longer term storage. The charge storage elements of the memory cells are most commonly conductive floating gates but may alternatively be non-conductive dielectric charge trapping material.
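As a purely illustrative aside on the SLC/MLC distinction above, the number of bits a cell can hold grows as the base-2 logarithm of the number of detectable charge levels it is operated with:

```python
# Illustrative arithmetic only: bits per cell = log2(detectable charge levels).
import math

for levels in (2, 4, 8, 16):  # SLC and common multi-level operating points
    bits = int(math.log2(levels))
    print(f"{levels} charge levels -> {bits} bit(s) per cell")
# 2 levels -> 1 bit (SLC); 4 -> 2 bits; 8 -> 3 bits; 16 -> 4 bits
```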
As mentioned above, a block of memory cells is the unit of erase, the smallest number of memory cells that are physically erasable together. For increased parallelism, however, the blocks are operated in larger metablock units. One block from each plane is logically linked together to form a metablock. The four blocks 510-516 are shown to form one metablock 518. All of the cells within a metablock are typically erased together. The blocks used to form a metablock need not be restricted to the same relative locations within their respective planes, as is shown in a second metablock 520 made up of blocks 522-528. Although it is usually preferable, for high system performance, to extend the metablocks across all of the planes, the memory system can be operated with the ability to dynamically form metablocks of any or all of one, two or three blocks in different planes. This allows the size of the metablock to be more closely matched with the amount of data available for storage in one programming operation.
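The metablock linking described above can be sketched, under assumed data structures, as choosing one free block from each participating plane; the dynamically sized case simply uses fewer planes. Block and plane numbering here is hypothetical apart from reusing the block numbers mentioned above.

```python
# Minimal sketch, under assumed data structures, of linking one erase block
# from each available plane into a metablock.
def form_metablock(free_blocks_by_plane, planes=None):
    """Return a list of (plane, block) pairs that together form one metablock.

    `free_blocks_by_plane` maps a plane number to a list of free block
    numbers in that plane; the chosen blocks need not share the same
    relative position across planes.
    """
    if planes is None:
        planes = sorted(free_blocks_by_plane)      # use every plane by default
    metablock = []
    for plane in planes:
        free = free_blocks_by_plane.get(plane)
        if free:                                   # this plane contributes a block
            metablock.append((plane, free.pop(0)))
    return metablock


# Example: a full-width metablock across four planes, then a dynamically
# formed metablock that spans only two planes.
print(form_metablock({0: [510], 1: [512], 2: [514], 3: [516]}))
print(form_metablock({0: [522], 2: [526]}))
```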
The individual blocks are in turn divided for operational purposes into pages of memory cells, as illustrated in
Referring again to
During operation, before the host system 100 stores a file in either of the memory systems 102a, 102b, the host system 100 breaks the file into a plurality of data chunks. The host system 100 stores a first copy of the plurality of data chunks of the file in the first memory system 102a and stores a second copy of the plurality of data chunks of the file in the second memory system 102b. When the host system 100 later reads the file from the first and second memory systems 102a, 102b, the host reads the file in data chunks from the first memory system 102a and/or the second memory system 102b based on factors such as which memory system is currently available to provide data to the host and, when both memory systems are available, which memory system can provide a data chunk to the host more quickly.
The host system 100 receives the data chunks in parallel from the first and second memory systems 102a, 102b, such that the host system 100 may receive a first data chunk from the first memory system 102a while simultaneously receiving a second data chunk from the second memory system 102b. Because the host system 100 receives the plurality of data chunks from the first and second memory systems 102a, 102b in parallel, it will be appreciated that the host system 100 reads the file more quickly than if it were to read all of the data chunks that make up the file from the first memory system 102a alone or from the second memory system 102b alone.
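A minimal sketch of this parallel read, continuing the directory-per-memory-system model assumed earlier, uses two threads as a stand-in for the two independent memory interfaces and interleaves even and odd chunk indices across the systems; the interleaving policy is an assumption, not the disclosed selection logic.

```python
# Minimal sketch: read different chunks of the same file from the two memory
# systems at the same time, then reassemble them in order.
import os
from concurrent.futures import ThreadPoolExecutor


def read_chunk(memory, index):
    with open(os.path.join(memory, f"chunk_{index:06d}"), "rb") as f:
        return index, f.read()


def read_file_parallel(memory_a, memory_b, chunk_count):
    """Fetch even-numbered chunks from one system and odd-numbered chunks
    from the other, in parallel, then join them back into the file data."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [
            pool.submit(read_chunk, memory_a if i % 2 == 0 else memory_b, i)
            for i in range(chunk_count)
        ]
        chunks = dict(f.result() for f in futures)
    return b"".join(chunks[i] for i in range(chunk_count))
```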
Additionally, storing a first copy of the plurality of data chunks of the file in the first memory system 102a and storing a second copy of the plurality of data chunks of the file in the second memory system 102b provides the host system 100 the ability to simultaneously perform two different functions with respect to the two memory systems. For example, the host system 100 may read data from the first memory system 102a while simultaneously writing data to the second memory system 102b. This provides the ability for the host system 100 to perform actions such as playing a video or music from data stored in the first memory system 102a while simultaneously downloading data to store in the second memory system 102b.
At step 704, the host stores a first copy of the plurality of data chunks to a first memory system, and at step 706, the host stores a second copy of the plurality of data chunks to a second memory system. In some implementations, the host device may be a cellular telephone, the first memory system may be an embedded flash memory, and the second memory system may be a removable memory card. However, in other implementations, different devices and/or memory configurations may be used.
At step 707, the host determines a need to read at least a portion of the file from the memory systems. At step 708, the host determines whether to read a data chunk for the file from the first memory system or the second memory system. In some implementations, the host may determine whether to read the data chunk for the file from the first memory system or the second memory system based on factors such as which memory system will be available first to provide the data chunk; when both the memory systems are available, which memory system is able to provide the data chunk to the host more quickly; whether the first or second memory system is storing a more recent version of the chunk of data; and/or any other performance factor associated with the first memory system and/or the second memory system that may assist the host in determining which memory system to read the data chunk from in order to increase performance.
For example, when a host determines which memory system will be available first to provide a data chunk, the host may examine whether one of the memory systems is currently booting up, whether an application is currently using one of the memory systems, whether the host is currently reading data from, or writing data to, one of the memory systems, whether one of the memory systems can provide faster performance, and/or any other factor that may indicate to the host that one of the memory systems may be available to provide data to the host prior to another memory system. In some implementations, the host will decide to read the data chunk from the memory system that will be available to the host first unless one of the memory systems is storing a more recent version of the data chunk.
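The per-chunk source decision described in the two preceding paragraphs might look like the following sketch, in which the availability flags, latency estimates, and per-chunk version counters are assumed inputs; how a real host would measure them is outside this sketch.

```python
# Minimal sketch of the per-chunk source decision. All fields are assumed
# inputs supplied by the host, not values defined by the disclosure.
def choose_source(chunk_index, systems):
    """`systems` is a list of dicts with keys: 'name', 'available' (bool),
    'est_latency' (seconds, lower is faster), and 'version' (a per-chunk
    version counter mapping). Returns the chosen system."""
    # A system holding a strictly newer version of this chunk wins outright.
    newest = max(s["version"].get(chunk_index, 0) for s in systems)
    candidates = [s for s in systems if s["version"].get(chunk_index, 0) == newest]

    # Among systems holding the newest copy, prefer one that is free now;
    # if several are free, prefer the one expected to respond fastest.
    free = [s for s in candidates if s["available"]]
    pool = free or candidates
    return min(pool, key=lambda s: s["est_latency"])
```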
In some instances, as explained in more detail below in conjunction with
At step 710, the host reads the data chunk from the identified memory system. At step 712, the host determines whether it needs to read additional data chunks from the memory systems to reassemble the required portion of the file. When the host determines that it does not need to read additional data chunks from the memory systems, at step 714, the host may reassemble at least a portion of the file from the data chunks read from the memory systems.
However, when the host determines it needs to read additional data chunks from the memory systems, the method loops to step 708 and repeats until the host determines at step 712 that it does not need to read additional data chunks. At step 714, the host then reassembles at least a portion of the file from the data chunks read from the memory systems.
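Putting steps 708 through 714 together, a minimal sketch of the read loop, reusing the hypothetical choose_source() and read_chunk() helpers sketched above, might be:

```python
# Minimal sketch of the read loop in steps 708-714.
def read_portion(first_index, last_index, systems):
    """Read chunks first_index..last_index, choosing a source per chunk,
    and reassemble them into a contiguous byte string."""
    pieces = {}
    for index in range(first_index, last_index + 1):  # step 712: more chunks?
        source = choose_source(index, systems)         # step 708: pick a system
        _, data = read_chunk(source["name"], index)    # step 710: read the chunk
        pieces[index] = data
    # step 714: reassemble the requested portion of the file in order
    return b"".join(pieces[i] for i in range(first_index, last_index + 1))
```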
In the implementations described above, the first and second memory systems are described as being present in a system or device. However, when one of the memory systems is removable, such as when the first memory system is an embedded flash memory and the second memory system is a removable memory card, it will be appreciated that both memory systems may not always be present in a system. In order to account for this, the host may be configured to determine when each of the memory systems is present so that the host performs parallel access to multiple storage systems when multiple storage systems are present, and refrains from attempting parallel access when they are not.
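A minimal sketch of this presence check, keeping the directory model and treating the existence of the removable system's directory as an illustrative stand-in for a real card-detect probe, might be:

```python
# Minimal sketch: only attempt parallel access when both memory systems are
# actually present; otherwise fall back to the embedded system alone.
import os


def available_systems(embedded, removable):
    """Return the list of memory systems the host may read from right now."""
    systems = [embedded]
    if removable is not None and os.path.isdir(removable["name"]):
        systems.append(removable)   # card present: parallel access is possible
    return systems


# Usage with the read_portion() sketch above:
#   data = read_portion(0, 9, available_systems(embedded, removable))
```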
Further, while the methods described above are described with respect to two memory systems, it will be appreciated that similar methods may be implemented with a system comprising more than two memory systems. When employing more than two memory systems, each memory system would store a copy of the plurality of data chunks of a file such that when the host reads at least a portion of the file from the multiple memory systems, the host may simultaneously read a data chunk of the file from two or more of the memory systems.
As stated above, a host may periodically update one or more data chunks stored in the first memory system or the second memory system.
However, when the host determines at step 806 that only one of the memory systems is available, at step 810, the host writes the updated data chunks to the available memory system. At step 812, the host writes an indicator in a management table that indicates to the host that the updated data chunks stored in that memory system are a more recent version than the counterpart data chunks stored in the other memory system. For example, if the host determines that the first memory system is available, but the second memory system is being utilized by another application, the host writes the updated data chunks to the first memory system. The host then writes an indicator to a management table associated with the first memory system that indicates that the updated data chunk in the first memory system is a more recent version than the counterpart data chunk stored in the second memory system.
At step 814, the host periodically determines whether the first and second memory systems are available to synchronize (“sync”) data between the memory systems. When the host determines that the first and second memory systems are available to sync data, at step 816, the host syncs the data chunks between the memory systems. At step 818, the host then removes any indicators that indicate data chunks in one of the memory systems are a more recent version than the counterpart data chunks stored in the other memory system.
When the host determines at step 814 that the first and second memory systems are not available to sync data between the memory systems, the host may continue to periodically check whether the memory systems are available to sync or loop to step 804, where the host determines a need to store one or more updated data chunks of the file to the first memory system and/or the second memory system. It will be appreciated that, during a period of time when both memory systems are not available to sync data, the first memory system may store some data chunks that are a more recent version than their counterpart data chunks stored in the second memory system, while the second memory system simultaneously stores other data chunks that are a more recent version than their counterpart data chunks stored in the first memory system.
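The update-and-sync flow of steps 804 through 818 can be sketched as follows, with the management table modeled as a per-system dict of chunk version counters; a higher counter on one system serves as the indicator that its copy is the more recent one awaiting synchronization. The write_chunk() and copy_chunk() helpers are illustrative, not part of the disclosure.

```python
# Minimal sketch of the update-and-sync flow (steps 804-818), continuing the
# directory-per-memory-system model. write_chunk()/copy_chunk() are
# illustrative helpers only.
import os
import shutil


def write_chunk(memory, index, data):
    with open(os.path.join(memory, f"chunk_{index:06d}"), "wb") as f:
        f.write(data)


def copy_chunk(src_memory, dst_memory, index):
    name = f"chunk_{index:06d}"
    shutil.copyfile(os.path.join(src_memory, name), os.path.join(dst_memory, name))


def write_update(index, data, target_memory, table):
    """Steps 810-812: write the updated chunk to the one available system and
    bump its version counter so the stale counterpart can be found later."""
    write_chunk(target_memory, index, data)
    table[target_memory][index] = table[target_memory].get(index, 0) + 1


def sync(memory_a, memory_b, table):
    """Steps 816-818: copy each newer chunk over its stale counterpart in the
    other system and equalize the counters, clearing the indicators."""
    ta, tb = table[memory_a], table[memory_b]
    for index in set(ta) | set(tb):
        va, vb = ta.get(index, 0), tb.get(index, 0)
        if va > vb:
            copy_chunk(memory_a, memory_b, index)
        elif vb > va:
            copy_chunk(memory_b, memory_a, index)
        ta[index] = tb[index] = max(va, vb)
```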
While the implementations described above allow the host to store updated chunks of data in the first memory system or the second memory system depending on the availability of the first and second memory systems, in other implementations, the host may only store updated chunks of data in one of the memory systems. For example, when the device is a cellular phone, the first memory system is an embedded flash memory, and the second memory system is a removable memory card, the host may only store updated chunks of data in the embedded flash memory.
Further, the disclosed systems and methods provide a host system the ability to perform parallel processing without the addition of extra hardware. As discussed above, in order to implement parallel access to multiple storage devices, a processor of a host system may operate as a management layer that interfaces between the host system and the memory systems. The management layer operating on the processor of the host system controls the storing of data to, and the reading of data from, the first and second memory systems. Because the management layer operates on the processor of the host system, the disclosed systems and methods may provide parallel access to multiple storage devices without the use of additional hardware components.
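As a closing sketch, the management layer itself can be pictured as a purely host-side object that owns per-system state and routes the split/store/read helpers sketched above; every name and field here is illustrative rather than part of the disclosure.

```python
# Minimal sketch of the management-layer idea: a host-side object, not extra
# hardware, that tracks per-system state and dispatches the helpers above.
from collections import defaultdict


class ManagementLayer:
    def __init__(self, memory_a, memory_b):
        self.systems = [
            {"name": name, "available": True, "est_latency": 0.0,
             "version": defaultdict(int)}
            for name in (memory_a, memory_b)
        ]

    def store_file(self, path):
        """Split the file and place one copy of every chunk on each system."""
        store_copies(path, self.systems[0]["name"], self.systems[1]["name"])

    def read(self, first_index, last_index):
        """Read the requested chunks, choosing a source chunk by chunk."""
        return read_portion(first_index, last_index, self.systems)
```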
It is intended that the foregoing detailed description be regarded as illustrative rather than limiting, and that it be understood that it is the following claims, including all equivalents, that are intended to define the spirit and scope of this invention.