This invention relates to flash-memory solid-state-drive (SSD) devices, and more particularly to hybrid mapping of single-level-cell (SLC) and multi-level-cell (MLC) flash systems.
Host systems such as Personal Computers (PCs) store large amounts of data in mass-storage devices such as hard disk drives (HDD). Mass-storage devices are sector-addressable rather than byte-addressable, since the smallest unit of flash memory that can be read or written is a page that is several 512-byte sectors in size. Flash memory is replacing hard disks and optical disks as the preferred mass-storage medium.
NAND flash memory is a type of flash memory constructed from electrically-erasable programmable read-only memory (EEPROM) cells, which have floating gate transistors. These cells use quantum-mechanical tunnel injection for writing and tunnel release for erasing. NAND flash is non-volatile so it is ideal for portable devices storing data. NAND flash tends to be denser and less expensive than NOR flash memory.
However, NAND flash has limitations. In the flash memory cells, the data is stored in binary terms—as ones (1) and zeros (0). One limitation of NAND flash is that when storing data (writing to flash), the flash can only write from ones (1) to zeros (0). When writing from zeros (0) to ones (1), the flash needs to be erased a “block” at a time. Although the smallest unit for read can be a byte or a word within a page, the smallest unit for erase is a block.
Single Level Cell (SLC) flash and Multi Level Cell (MLC) flash are two types of NAND flash. The erase block size of SLC flash may be 128 K+4 K bytes while the erase block size of MLC flash may be 256 K+8 K bytes. Another limitation is that NAND flash memory has a finite number of erase cycles, between 10,000 and 100,000, after which the flash wears out and becomes unreliable.
Comparing MLC flash with SLC flash, MLC flash memory has advantages and disadvantages in consumer applications. In the cell technology, SLC flash stores a single bit of data per cell, whereas MLC flash stores two or more bits of data per cell. MLC flash can have twice or more the density of SLC flash with the same technology. But the performance, reliability and durability may decrease for MLC flash.
MLC flash has a higher storage density and is thus better for storing long sequences of data; yet the reliability of MLC is less than that of SLC flash. Data that is changed more frequently is better stored in SLC flash, since SLC is more reliable and rapidly-changing data is more likely to be critical data than slowly changing data. Also, smaller units of data may more easily be aggregated together into SLC than MLC, since SLC often has fewer restrictions on write sequences than does MLC.
A consumer may desire a large capacity flash-memory system, perhaps as a replacement for a hard disk. A solid-state disk (SSD) made from flash-memory chips has no moving parts and is thus more reliable than a rotating disk.
Several smaller flash drives could be connected together, such as by plugging many flash drives into a USB hub that is connected to one USB port on a host, but then these flash drives appear as separate drives to the host. For example, the host's operating system may assign each flash drive its own drive letter (D:, E:, F:, etc.) rather than aggregate them together as one logical drive, with one drive letter. A similar problem could occur with other bus protocols, such as Serial AT-Attachment (SATA), integrated device electronics (IDE), Serial Attached small-computer system interface (SCSI) (SAS) bus, a fiber-channel bus, and Peripheral Components Interconnect Express (PCIe). The parent application, now U.S. Pat. No. 7,103,684, describes a single-chip controller that connects to several flash-memory mass-storage blocks.
Larger flash systems may use multiple channels to allow parallel access, improving performance. A wear-leveling algorithm allows the memory controller to remap logical addresses to any different physical addresses so that data writes can be evenly distributed. Thus the wear-leveling algorithm extends the endurance of the flash memory, especially MLC-type flash memory.
What is desired is a multi-channel flash system with flash memory on modules in each of the channels. It is desired to use both MLC and SLC flash memory in a hybrid system to maximize storage efficiency. A hybrid mapping structure is desirable to map logical addresses to physical blocks in both SLC and MLC flash memory; such a hybrid mapping structure can also benefit an SLC-only or MLC-only flash memory storage system. The hybrid mapping table can reduce the amount of costly SRAM required compared with an all-page-mapping method. It is further desired to allocate new host data to SLC flash when the data size is smaller and more likely to change, but to allocate new host data to MLC flash when the data is in a longer sequence and is less likely to be changed.
A smart storage switch is desired between the host and the multiple flash-memory modules so that data may be striped across the multiple channels. It is desired that the smart storage switch interleaves and stripes data accesses to the multiple channels of flash-memory devices.
The present invention relates to an improvement in hybrid MLC/SLC flash systems. The following description is presented to enable one of ordinary skill in the art to make and use the invention as provided in the context of a particular application and its requirements. Various modifications to the preferred embodiment will be apparent to those with skill in the art, and the general principles defined herein may be applied to other embodiments. Therefore, the present invention is not intended to be limited to the particular embodiments shown and described, but is to be accorded the widest scope consistent with the principles and novel features herein disclosed.
Virtual storage bridges 42, 43 are protocol bridges that also provide physical signaling, such as driving and receiving differential signals on any differential data lines of LBA storage bus interface 28, detecting or generating packet start or stop patterns, checking or generating checksums, and higher-level functions such as inserting or extracting device addresses and packet types and commands. The host address from host motherboard 10 contains a logical block address (LBA) that is sent over LBA storage bus interface 28, although this LBA may be striped by smart storage switch 30 in some embodiments that order and distribute equal-sized data to attached NVM flash memory 68 through NVM controller 76.
Buffers in SDRAM 60 coupled to virtual buffer bridge 32 can store the sector data when the host writes data to a MLCA disk, and temporarily hold data while the host is fetching from flash memories. SDRAM 60 is a synchronous dynamic-random-access memory for smart storage switch 30. SDRAM 60 also can be used as temporary data storage or a cache for performing Write-Back, Write-Thru, or Read-Ahead Caching.
Virtual storage processor 140 provides striping services to smart storage transaction manager 36. For example, logical addresses from the host can be calculated and translated into logical block addresses (LBA) that are sent over LBA storage bus interface 28 to NVM flash memory 68 controlled by NVM controllers 76. Host data may be alternately assigned to flash memory in an interleaved fashion by virtual storage processor 140 or by smart storage transaction manager 36. NVM controller 76 may then perform a lower-level interleaving among NVM flash memory 68. Thus interleaving may be performed on two levels, both at a higher level by smart storage transaction manager 36 among two or more NVM controllers 76, and by each NVM controller 76 among NVM flash memory 68.
NVM controller 76 performs logical-to-physical remapping as part of a flash translation layer function, which converts LBA's received on LBA storage bus interface 28 to PBA's that address actual non-volatile memory blocks in NVM flash memory 68. NVM controller 76 may perform wear-leveling and bad-block remapping and other management functions at a lower level.
When operating in single-endpoint mode, smart storage transaction manager 36 not only buffers data using virtual buffer bridge 32, but can also re-order packets for transactions from the host. A transaction may have several packets, such as an initial command packet to start a memory read, a data packet from the memory device back to the host, and a handshake packet to end the transaction. Rather than have all packets for a first transaction complete before the next transaction begins, packets for the next transaction can be re-ordered by smart storage switch 30 and sent to NVM controller 76 before completion of the first transaction. This allows more time for memory access to occur for the next transaction. Transactions are thus overlapped by re-ordering packets.
Packets sent over LBA storage bus interface 28 are re-ordered relative to the packet order on host storage bus 18. Transaction manager 36 may overlap and interleave transactions to different NVM flash memory 68 controlled by NVM controllers 76, allowing for improved data throughput. For example, packets for several incoming host transactions are stored in SDRAM buffer 60 via virtual buffer bridge 32 or an associated buffer (not shown). Transaction manager 36 examines these buffered transactions and packets and re-orders the packets before sending them over internal bus 38 to virtual storage bridge 42, 43, then to one of the downstream flash storage blocks via NVM controllers 76.
A packet to begin a memory read of a flash block through bridge 43 may be re-ordered ahead of a packet ending a read of another flash block through bridge 42 to allow access to begin earlier for the second flash block.
Encryption and decryption of data may be performed by encryptor/decryptor 35 for data passing over host storage bus 18. Upstream interface 34 may be configured to divert data streams through encryptor/decryptor 35, which can be controlled by a software or hardware switch to enable or disable the function. This function can be an Advanced Encryption Standard (AES), IEEE 1667 standard, etc, which will authenticate the transient storage devices with the host system either through hardware or software programming. The methodology can be referenced to U.S. application Ser. No. 11/924,448, filed Oct. 25, 2007. Battery backup 47 can provide power to smart storage switch 30 when the primary power fails, allowing write data to be stored into flash. Thus a write-back caching scheme may be used with battery backup 47 rather than only a write-through scheme.
Hybrid mapper 46 in NVM controller 76 performs one level of mapping to NVM flash memory 68 that is MLC flash, or two levels of mapping to NVM flash memory 68 that is SLC flash. Data may be buffered in SDRAM 77 within NVM controller 76. Alternatively, NVM controller 76 and NVM flash memory 68 can be embedded with smart storage switch 30.
While the MLC device has four states shown in
Alternately, states 00 and 10 could be used, while states 01 and 11 are not used. State 00 emulates a SLC 0 bit, while state 10 emulates a SLC 1 bit. This may be done by programming either one page out of the two pages shared by a single MLC cell (such as the 00 to 01 state to improve programming time, or the 00 to 10 state to improve noise margin). Alternatively, both pages can be repeatedly programmed with the same data bits (00 and 11 states used) to improve data retention but sacrifice programming time.
Thus a MLC flash device may be operated in such a way to emulate a SLC flash device. Data reliability is improved since fewer MLC states are used, and noise margins may be relaxed. A hybrid system may have both SLC and MLC flash devices, or it may have only MLC flash devices, but operate some of those MLC devices in a SLC-emulation mode. Data thought to be more critical may be stored in SLC, while less-critical data may be stored in MLC.
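As a rough illustration, the restricted state assignment for SLC emulation can be modeled as a simple lookup. The state values follow the 00/10 alternative described above; actual device encodings and programming sequences are device-specific.

```python
# Illustrative sketch: emulating SLC storage in a 2-bit MLC cell by
# restricting it to two of its four states. Per the description above,
# state 00 emulates a SLC 0 bit and state 10 emulates a SLC 1 bit.
SLC_EMULATION_MAP = {0: 0b00, 1: 0b10}
REVERSE_MAP = {v: k for k, v in SLC_EMULATION_MAP.items()}

def write_slc_bit(bit):
    """Return the MLC state used to store one logical SLC bit."""
    return SLC_EMULATION_MAP[bit]

def read_slc_bit(mlc_state):
    """Recover the logical SLC bit from a restricted MLC state."""
    return REVERSE_MAP[mlc_state]
```

Because only two widely-separated states are used, the read noise margin is larger than in normal MLC operation, which is the reliability benefit noted above.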
Data from flash memory may be transferred to SDRAM buffer 410 by the motherboard system controller using both volatile memory controller 408 and non-volatile memory controller 406. A direct-memory access (DMA) controller may be used for these transfers, or CPU 402 may be used. Non-volatile memory controller 406 may read and write to flash memory modules 414. DMA may also access NVMD 412, which are controlled by smart storage switch 30.
NVMD 412 contain both NVM controller 76 and flash memory chips 68 as shown in
Metal contact pads 112 are positioned along the bottom edge of the module on both front and back surfaces. Metal contact pads 112 mate with pads on a module socket to electrically connect the module to a PC motherboard. Holes 116 are present on some kinds of modules to ensure that the module is correctly positioned in the socket. Notches 114 also ensure correct insertion and alignment of the module. Notches 114 can prevent the wrong type of module from being inserted by mistake. Capacitors or other discrete components are surface-mounted on the substrate to filter noise from NVMD 412, which are also mounted using a surface-mount-technology (SMT) process.
Flash module 110 connects NVMD 412 to metal contact pads 112. The connection to flash module 110 is through a logical bus LBA or through LBA storage bus interface 28. Flash memory chips 68 and NVM controller 76 of
Metal contact pads 112 form a connection to a flash controller, such as non-volatile memory controller 406 in
Metal contact pads 112′ are positioned along the bottom edge of the module on both front and back surfaces. Metal contact pads 112′ mate with pads on a module socket to electrically connect the module to a PC motherboard. Holes 116 are present on some kinds of modules to ensure that the module is correctly positioned in the socket. Notches 114 also ensure correct insertion of the module. Capacitors or other discrete components are surface-mounted on the substrate to filter noise from NVMD 412 and smart storage switch 30.
Since flash module 73 has smart storage switch 30 mounted on its substrate, NVMD 412 do not directly connect to metal contact pads 112′. Instead, NVMD 412 connect using wiring traces to smart storage switch 30, then smart storage switch 30 connects to metal contact pads 112′. The connection to flash module 73 is through a LBA storage bus interface 28 from controller 404, such as shown in
Optional power connector 45 is located on PCIe card 300 to supply power for pluggable NVMD 368 and an expansion daughter card in case the power from connector 312 cannot provide enough power. Battery backup 47 can be soldered in or attached to PCIe card 300 to supply power to PCIe card 300, slots 304, and connector 305 in case of sudden power loss.
When the host SC is less than or equal to the threshold SC, page-mode mapping is used for this data, and the data is written to SLC flash. The data is assumed to be more critical or more likely to be changed in the future when the SC is small. For example, critical system files such as directories of files may change just a few entries and thus have a small sector count. Also, small pieces of data have a small sector count, and may be stored with other unrelated data when packed into a larger block. Using SLC better allows for such packing by the smart storage switch.
Since there are many pages in a block, page-mode mapping provides a finer granularity than does block-mode mapping. Thus critical, small data is page-mapped into more reliable SLC flash memory, while less-critical, long sequences of data are block-mapped into cheaper, denser MLC flash memory. Long sequences of data (large SC) are block-mapped into MLC, while short data sequences (small SC) are page-mapped into SLC.
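The sector-count routing rule above can be sketched as follows. The threshold value of 4 matches the worked example later in this description, but in practice the threshold is programmable.

```python
# Sketch of the sector-count (SC) routing rule: short host writes go
# to SLC flash under page-mode mapping, long sequential writes go to
# MLC flash under block-mode mapping. Threshold value is illustrative.
SC_THRESHOLD = 4  # sectors; held in a programmable register in practice

def select_mapping_mode(sector_count):
    """Return (mapping_mode, target_flash) for a new host write."""
    if sector_count <= SC_THRESHOLD:
        return ("page", "SLC")   # small, likely-critical, changeable data
    return ("block", "MLC")      # long sequence, less likely to change
```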
However, when the stored FC exceeds the threshold in register 15, the data is moved to SLC and the block-mapped entry is replaced with a page-mapped entry. Thus frequently-accessed data is eventually moved to SLC flash. This method is more precise than that of
When an existing entry is found in the mapping tables, step 204, and the mapping entry indicates that this data is mapped to a SLC flash memory, step 206, then page mode is selected, step 214, and the 2-level mapping tables are used to find the physical-block address (PBA) to write the data to in SLC flash memory, step 216.
When an existing entry is found in the mapping tables, step 204, and the mapping entry indicates that this data is mapped to a MLC flash memory, step 206, then the frequency counter (FC) is examined, step 208. When the FC is less than the FC threshold, step 208, then block mode is selected for this new data, step 210. The data is written to MLC flash, step 212 and a 1-level mapping entry is used.
When the FC exceeds the FC threshold, step 208, then page mode is selected for this new data, step 220. The data for this block is relocated from MLC flash memory to SLC flash memory, and a new entry is loaded into two levels of the mapping table, step 218. The data is now accessible and mappable in page units rather than in the larger block units.
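A minimal sketch of this frequency-counter promotion follows, with an assumed threshold value and illustrative field names; the real counter and threshold live in controller registers and mapping-table entries.

```python
# Sketch of the frequency-counter (FC) promotion: block-mapped data in
# MLC that is re-written often enough is relocated to SLC flash and
# re-mapped at page granularity. Threshold and field names illustrative.
FC_THRESHOLD = 3  # example value; stored in a register in practice

def on_rewrite(entry):
    """Handle a host re-write to data currently block-mapped in MLC."""
    entry["fc"] += 1
    if entry["fc"] > FC_THRESHOLD:
        # relocate from MLC to SLC; switch to 2-level page mapping
        entry["flash"] = "SLC"
        entry["mode"] = "page"
    return entry
```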
A host write command is passed through smart storage switch 30 to the NVM controller 76 (
When the sector count does not exceed the threshold SC, step 238, page mode is selected for this new data, step 232. A 2-level page entry is loaded into the mapping table, step 234, and the data is written to SLC flash memory.
When an existing entry is found in the mapping tables, step 234, the mapping tables are read for the host's LBA, and the method already indicated in the mapping tables is used to select either page-mode or block mode, step 230. The data is written to SLC flash if earlier data was written to SLC flash, while the data is written to MLC if earlier data was written to MLC, as indicated by the existing mapping-table entry.
When the selected entry has B/P set, block mode is indicated, and the physical-block address (PBA) is read from this entry in first-level mapping table 20. The PBA points to a whole physical block in MLC flash memory.
When the selected entry has B/P cleared, page mode is indicated. A virtual LBA (VLBA), in a range from 0 to the maximum allocated block number and assigned sequentially from 0 for page mode, is read from the selected entry in first-level mapping table 20. Each VLBA has its own second-level mapping table 22. This VLBA together with a page offset (PO) from the LSA points to an entry in second-level mapping table 22. The content pointed to by the entry in second-level mapping table 22 contains the physical-block address (PBA), which is newly assigned from one of the available empty blocks with the smallest wear-leveling count, and a page number. The PBA and page number are read from this entry in second-level mapping table 22. The PBA points to a whole physical block in SLC flash memory while the page number selects a page within that block. The page number is newly assigned from the lowest-numbered blank page in the PBA. The page number in the content pointed to by the entry may therefore differ from the PO from the LSA.
The granularity of each entry in second-level mapping table 22 maps just one page of data, while the granularity of each entry in first-level mapping table 20 maps a whole block of data. Since there may be 4, 8, 16, 128, 256, or some other number of pages per block, many entries in second-level mapping table 22 are needed to completely map a block that is in page mode. However, only one entry in first-level mapping table 20 is needed for a whole block of data. Thus block mode uses the storage space of SRAM for mapping tables 20, 22 much more efficiently than does page mode.
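The two-level lookup described above can be sketched as follows. The table contents are illustrative, drawn from the examples in this description (a block-mapped LBA pointing at PBA498, and a page-mapped LBA whose VLBA 0 table maps page offset 1 to page 1 of PBA8); real tables are packed SRAM structures.

```python
# Minimal model of the two-level hybrid lookup. A first-level entry
# either holds a PBA directly (block mode, B/P set) or a VLBA that
# selects a second-level table whose per-page entries hold a
# (PBA, physical page) pair. Field names are illustrative.
first_level = {
    0: {"bp": "B", "pba": 498},     # block-mapped LBA
    1: {"bp": "P", "vlba": 0},      # page-mapped LBA
}
second_level = {
    0: {1: {"pba": 8, "page": 1}},  # VLBA 0: page offset 1 -> PBA8, page 1
}

def lookup(lba, page_offset):
    """Translate (LBA, page offset) to (PBA, physical page)."""
    entry = first_level[lba]
    if entry["bp"] == "B":
        # block mode: one entry maps the whole block, so the logical
        # page offset addresses the page within the physical block
        return (entry["pba"], page_offset)
    # page mode: the stored physical page may differ from the offset
    hit = second_level[entry["vlba"]][page_offset]
    return (hit["pba"], hit["page"])
```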
If unlimited memory were available for mapping tables 20, 22, all data could be page mapped. However, entries for first-level mapping table 20 and second-level mapping table 22 are stored in SRAM in NVM controller 76, or smart storage switch 30. The storage space available for mapping entries is thus limited. The hybrid mapping system allocates only about 20% of the entries for use as page entries in second-level mapping table 22, while 80% of the entries are block entries in first-level mapping table 20. Thus storage required for the mapping tables is only about 20% (compared to page-based mapping table) while providing the benefit of page-granularity mapping for more critical data. This flexible hybrid mapping approach is storage-efficient yet provides the benefit of page-based mapping where needed.
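A back-of-the-envelope check of this savings, under the simplifying assumption that first-level and second-level entries are the same size, and that only about 20% of the blocks receive full page-granularity maps:

```python
# Rough estimate of hybrid mapping-table size vs. an all-page-mapping
# table, assuming uniform entry sizes. Parameter values illustrative.
def table_entries(num_blocks, pages_per_block, page_mapped_fraction=0.2):
    """Entries needed by the hybrid scheme vs. all-page mapping."""
    all_page = num_blocks * pages_per_block
    hybrid = (num_blocks  # one first-level entry per block
              + int(num_blocks * page_mapped_fraction) * pages_per_block)
    return hybrid, all_page

hybrid, full = table_entries(num_blocks=1000, pages_per_block=128)
# hybrid = 1000 + 200*128 = 26600 entries vs full = 128000 entries,
# roughly 21% of the all-page-mapping table size
```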
When the district number from the LSA matches the district number of all the entries in first-level mapping table 20, the LBA from the LSA selects an entry in first-level mapping table 20. When B/P indicates Block mode, the PBA is read from this selected entry and forms part of the physical address, along with the page number and sector numbers from the LSA. The PBA may have more address bits than the LBA, allowing the district to be mapped to any part of the physical flash memory.
The PBA and the physical page number are read from this selected entry in second-level mapping table 22 and forms part of the physical address, along with the sector number from the LSA. Thus both the block and the page are remapped using two levels of mapping tables 20, 22.
The PBA and the physical zone number are read from this selected entry in second-level mapping table 22 and form part of the physical address, along with the page number and sector number from the LSA. Thus both the block and the zone are remapped using two levels of mapping tables 20, 22. Fewer mapping entries are needed with zone-mode than for page-mode, since each zone is multiple pages.
LBA32 1 from the host LSA selects entry 1 in first-level mapping table 20. Since the sector count SC is less than the threshold of 4, page mode is selected. VLBA0 is read from this selected entry and selects a table of entries in second-level mapping table 22. The page number from the host LSA (=1) selects page 1 in this second level table, and PBA=0 is read from the entry to locate the physical block PBA0 in NVM flash memory 68. The page number stored in the selected entry in second-level mapping table 22 selects the page in PBA0, page P0. The sector data from the host is written to the second, third, and fourth sectors in page P0 of block PBA0 and shown as sectors 21, 22, 23 in
While sectors 28-31 were written to SLC flash, sectors 32-45 were written to MLC flash. The host write of sectors 28-45 was performed in two phases shown in
Empty page P0 is selected to receive new sectors 21-23. The new data for sectors 21-23 are written to page P0, and entry P1 in second-level mapping table 22 is loaded with PBA1, P0 to point to the fresh data in page 0. The sequence number increases to 3.
The mapping tables are already loaded for district 2; however, no entries exist for LBA=1. LBA=1 selects entry LBA1 in first-level mapping table 20, which is initially empty. A new empty physical block is found, such as from a pool of empty blocks, with PBA498 selected. The address of PBA498 is written to entry LBA1 in first-level mapping table 20, and the block bit B is set to indicate it is in block mode, since SC is larger than the threshold. Sectors 1-10 of host data are written to pages 1, 2, 3 of PBA498, as
The mapping tables are already loaded with an entry for LBA=1. A new empty physical block is found for storing second-level mapping table 22 and the sector data, PBA8, from the pool of empty SLC blocks. The address of PBA8 is written to the page-PBA field (VLBA field in
The first page in PBA8 is selected to receive the sector data, and sectors 0-3 of host data are written to page 0 of PBA8, and the spare area of PBA8 page 0 is written with the LBA, B/P bit, and sequence number. The page 0 entry in second-level mapping table 22 is also written with the LBA and sequence number. Second-level mapping table 22 is stored in SRAM but corresponds to the same page in NVM flash memory 68. Pages in page mode are sequentially addressed and programmed. The sequence number is incremented to 1 since this is a hit on data previously written in block mode to block PBA498.
The mapping tables are already loaded with an entry for LBA=1. The page-mode bit P is set for this entry, so PBA8 is selected and locates entries in second-level mapping table 22 for PBA8. The next empty page entry in second-level mapping table 22 is selected, page P1, and loaded with the LBA and sequence number. Sectors 8-10 of host data are written to page 1 of PBA8, and the spare area is written with the LBA, B/P bit, and sequence number. The sequence number is also incremented since a hit case happens compared to the contents of PBA498 page 3.
The mapping tables are already loaded with an entry for LBA=1. The page-mode bit P is set for this entry, so PBA8 is selected and locates entries in second-level mapping table 22 for PBA8. The next empty page entry in second-level mapping table 22 is selected, page P2, and loaded with the LBA and sequence number. Sectors 0-3 of host data are written to page 2 of PBA8, and the spare area is written with the LBA, B/P bit, and sequence number, which is incremented to show that the data in page 0 is stale, since the level-2 mapping table with the previous entry 1,1 has already been occupied.
Entry LBA1 in first-level mapping table 20 is read, and PBA8 points to second-level mapping table 22. The entries in second-level mapping table 22 are examined and entry P1 is found that stores data for logical page 3. The sequence number in entry P1 in second-level mapping table 22 is 1, which is larger than the sequence number of 0 for these same sectors in PBA498. Sectors 8-10 are read from page 1 of PBA8 in NVM flash memory 68 and sent to the host.
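The stale-data resolution in this read can be sketched as a simple sequence-number comparison. The locations and numbers follow the example above; the structure names are illustrative.

```python
# Sketch of sequence-number resolution on reads: when the same logical
# sectors exist in both a block-mapped MLC copy and a later page-mapped
# SLC copy, the copy with the larger sequence number is the valid one.
copies = [
    {"location": "MLC PBA498 page 3", "seq": 0},  # stale block-mode copy
    {"location": "SLC PBA8 page 1",   "seq": 1},  # fresh page-mode copy
]

def freshest(candidates):
    """Return the copy with the highest sequence number."""
    return max(candidates, key=lambda c: c["seq"])
```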
A host write command is passed through smart storage switch 30 to the NVM controller 76 (
When an existing entry is found in the mapping tables, step 204, and the mapping entry indicates that this data is mapped to a SLC flash memory, step 206, then page mode is selected, step 214, and the 2-level mapping tables are used to find the physical-block address (PBA) to write the data to in SLC flash memory, step 216.
When an existing entry is found in the mapping tables, step 204, and the mapping entry indicates that this data is mapped to a MLC flash memory, step 206, then the frequency counter (FC) is examined, step 208. When the FC is less than the FC threshold, step 208, then block mode remains selected for this new data. The data is written to MLC flash, step 205 using the existing 1-level mapping entry.
When the FC exceeds the FC threshold, step 208, then page mode is selected for this new data, step 220. The data for this block is relocated from MLC flash memory to SLC flash memory, and a new entry is loaded into two levels of the mapping table, step 218. The data is now accessible and mappable in page units rather than in the larger block units.
When an existing entry is not found in the mapping tables, step 204, and SC is greater than the SC threshold, step 238, then block mode is selected, step 236, for this new data. The data is written to MLC flash, step 238 using the 1-level mapping entry. When an existing entry is not found in the mapping tables, step 204, and SC is smaller than the SC threshold, step 238, then page mode is selected, step 232, for this new data. The data is written to SLC flash, step 234 using the 2-level mapping entry.
The starting address from the host is adjusted for each dispatch to NVMD. Multiple commands are then dispatched from smart storage switch 30 to NVM controllers 76, step 258.
A modified header and page 1 are first dispatched to NVMD 1, then another header and page 2 are dispatched to NVMD 2, then another header and page 3 are dispatched to NVMD 3, then another header and page 4 are dispatched to NVMD 4. This is the first stripe. Then another header and page 5 are dispatched to NVMD 1, another header and page 6 are dispatched to NVMD 2, etc. The stripe size may be optimized so that each NVMD is able to read or write near their maximum rate.
A modified header and four pages are dispatched together to each channel. The stripe boundary is at 4×4 or 16 pages.
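Both striping arrangements above can be sketched with one round-robin dispatch routine, parameterized by stripe depth (1 page per channel for the first arrangement, 4 pages per channel for the second). Channel indices here are 0-based for illustration, where the text numbers the NVMD from 1.

```python
# Sketch of round-robin page striping across NVM devices. Pages are
# dispatched to channels in groups of `depth` pages; with depth=4 and
# four channels the stripe boundary falls at 16 pages, as described.
def stripe(pages, num_nvmd=4, depth=1):
    """Assign each page to an NVMD channel; returns (channel, page) pairs."""
    dispatch = []
    for i, page in enumerate(pages):
        nvmd = (i // depth) % num_nvmd
        dispatch.append((nvmd, page))
    return dispatch
```

The stripe depth would be tuned so each NVMD can read or write near its maximum rate, per the description above.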
After the host data is stored in SDRAM 60, smart storage switch 30 issues a DMA write command to NVMD 412. The NVM controller returns a DMA acknowledgement, and then smart storage switch 30 sends the data stored in SDRAM 60. The data is buffered in SDRAM buffer 77 in NVM controller 76 or another buffer and then written to flash memory. Once the data has been written to flash memory, a successful completion status is sent back to smart storage switch 30. The internal DMA write is complete from the viewpoint of smart storage switch 30. The access time of smart storage switch 30 is relatively long due to write-through mode. However, this access time is hidden from host motherboard 10.
After the host data is stored in SDRAM 60, smart storage switch 30 issues a DMA write command to NVMD 412. The NVM controller returns a DMA acknowledgement, and then smart storage switch 30 sends the data stored in SDRAM 60. The data is stored in the SDRAM buffer 77 in NVM controller 76 (
Smart storage switch 30 issues a successful completion status back to host motherboard 10. The DMA write is complete from the viewpoint of host motherboard 10, and the host access time is relatively long.
In this case, smart storage switch 30 found no cache hit in SDRAM 60. Smart storage switch 30 then issues a DMA read command to NVMD 412. The NVM controller finds a cache hit, then reads the data from its cache, SDRAM buffer 77 in NVM controller 76 (
NVMD 412 sends a successful completion status back to smart storage switch 30. The internal DMA read is complete from the viewpoint of smart storage switch 30. Smart storage switch 30 issues a successful completion status back to host motherboard 10. The DMA read is complete from the viewpoint of host motherboard 10. The host access time is relatively long, but is much shorter than if flash memory had to be read.
Several other embodiments are contemplated by the inventors. For example, while storing page-mode-mapped data into SLC flash memory has been described, this SLC flash memory may be a MLC flash memory that is emulating SLC, such as shown in
Alternatively, NVMD 412 can be one of the following: a block mode mapper with hybrid SLC/MLC flash memory, a block mode mapper with SLC or MLC, a page mode mapper with hybrid MLC/SLC flash memory, a page mode mapper with SLC or MLC. Alternatively, NVMD 412 in flash module 110 can include raw flash memory chips. NVMD 412 and smart storage switch 30 in flash module 73 can include raw flash memory chips and a flash controller as shown in
The hybrid mapping tables require less space in SRAM than a pure page-mode mapping table since only about 20% of the blocks are fully page-mapped; the other 80% of the blocks are block-mapped, which requires much less storage than page-mapping. Copying of blocks for relocation is less frequent with page mapping since the sequential-writing rules of the MLC flash are violated less often in page mode than in block mode. This increases the endurance of the flash system and increases performance.
The mapping tables may be located in an extended address space, and may use virtual addresses or illegal addresses that are greater than the largest address in a user address space. Pages may remain in the host's page order or may be remapped to any page location. Rather than store a separate B/P bit, an extra address bit may be used, such as a MSB of the PBA stored for an entry. Other encodings are possible.
Many variations of
The flash memory may be embedded on a motherboard or SSD board or could be on separate modules. Capacitors, buffers, resistors, and other components may be added. Smart storage switch 30 may be integrated on the motherboard or on a separate board or module. NVM controller 76 can be integrated with smart storage switch 30 or with raw-NAND flash memory chips as a single-chip device or a plug-in module or board.
Using multiple levels of controllers, such as in a president-governor arrangement of controllers, the controllers in smart storage switch 30 may be less complex than would be required for a single level of control for wear-leveling, bad-block management, re-mapping, caching, power management, etc. Since lower-level functions are performed among flash memory chips 68 within each flash module by NVM controllers 76 as a governor function, the president function in smart storage switch 30 can be simplified. Less expensive hardware may be used in smart storage switch 30, such as using an 8051 processor for virtual storage processor 140 or smart storage transaction manager 36, rather than a more expensive processor core such as an Advanced RISC Machine (ARM-9) CPU core.
Different numbers and arrangements of flash storage blocks can connect to the smart storage switch. Rather than use LBA storage bus interface 28 or differential serial packet buses, other serial buses such as synchronous Double-Data-Rate (DDR), a differential serial packet data bus, a legacy flash interface, etc., may be substituted.
Mode logic could sense the state of a pin only at power-on rather than sense the state of a dedicated pin. A certain combination or sequence of states of pins could be used to initiate a mode change, or an internal register such as a configuration register could set the mode. A multi-bus-protocol chip could have an additional personality pin to select which serial-bus interface to use, or could have programmable registers that set the mode to hub or switch mode.
The transaction manager and its controllers and functions can be implemented in a variety of ways. Functions can be programmed and executed by a CPU or other processor, or can be implemented in dedicated hardware, firmware, or in some combination. Many partitionings of the functions can be substituted. Smart storage switch 30 may be hardware, or may include firmware or software or combinations thereof.
Overall system reliability is greatly improved by employing parity/ECC with multiple NVM controllers 76, and by distributing data segments into a plurality of NVM blocks. However, a CPU engine with a DDR/SDRAM cache may be required to meet the computing-power requirements of the complex ECC/parity calculation and generation. Another benefit is that, even if one flash block or flash module is damaged, data may be recoverable, or the smart storage switch can initiate a “Fault Recovery” or “Auto-Rebuild” process to insert a new flash module and to recover or rebuild the “lost” or “damaged” data. The overall system fault tolerance is significantly improved.
Wider or narrower data buses and flash-memory chips could be substituted, such as with 16 or 32-bit data channels. Alternate bus architectures with nested or segmented buses could be used internal or external to the smart storage switch. Two or more internal buses can be used in the smart storage switch to increase throughput. More complex switch fabrics can be substituted for the internal or external bus.
Data striping can be done in a variety of ways, as can parity and error-correction code (ECC). Packet re-ordering can be adjusted depending on the data arrangement used to prevent re-ordering for overlapping memory locations. The smart switch can be integrated with other components or can be a stand-alone chip.
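As a concrete illustration of one of the many striping arrangements mentioned above, round-robin striping assigns sector i to channel i mod N, so consecutive host sectors land on different channels and can be written in parallel. This is a sketch of one possible scheme, not the specific arrangement of the embodiments.

```python
def stripe(sectors, n_channels):
    """Round-robin stripe a list of sectors across n_channels channels."""
    channels = [[] for _ in range(n_channels)]
    for i, sector in enumerate(sectors):
        # Sector i goes to channel i mod n_channels, stripe row i // n_channels.
        channels[i % n_channels].append(sector)
    return channels
```

Striping eight sectors across four channels puts sectors 0 and 4 on channel 0, sectors 1 and 5 on channel 1, and so on; parity or ECC segments (as discussed above) can occupy a dedicated or rotating channel.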
Additional pipeline or temporary buffers and FIFO's could be added. For example, a host FIFO in smart storage switch 30 may be part of smart storage transaction manager 36, or may be stored in SDRAM 60. Separate page buffers could be provided in each channel. A clock source could be added.
A single package, a single chip, or a multi-chip package may contain one or more of the plurality of channels of flash memory and/or the smart storage switch.
An MLC-based flash module may have four MLC flash chips with two parallel data channels, but different combinations may be used to form other flash modules, for example, four, eight or more data channels, or eight, sixteen or more MLC chips. The flash modules and channels may be in chains, branches, or arrays. For example, a branch of 4 flash modules could connect as a chain to smart storage switch 30. Other size aggregation or partition schemes may be used for different access of the memory. In place of flash memory, a phase-change memory (PCM), ferroelectric random-access memory (FRAM), magnetoresistive RAM (MRAM), Memristor, PRAM, SONOS, resistive RAM (RRAM), Racetrack memory, or nano RAM (NRAM) may be used.
The host can be a PC motherboard or other PC platform, a mobile communication device, a personal digital assistant (PDA), a digital camera, a combination device, or other device. The host bus or host-device interface can be SATA, PCIE, SD, USB, or other host bus, while the internal bus to a flash module can be PATA, multi-channel SSD using multiple SD/MMC, compact flash (CF), USB, or other interfaces in parallel. A flash module could be a standard PCB or may be a multi-chip module packaged in a TSOP, BGA, LGA, COB, PIP, SIP, CSP, POP, or Multi-Chip-Package (MCP) package, and may include raw-NAND flash memory chips, raw-NAND flash memory chips in separate flash chips, or other kinds of NVM flash memory 68. The internal bus may be fully or partially shared or may be separate buses. The SSD system may use a circuit board with other components such as LED indicators, capacitors, resistors, etc.
Directional terms such as upper, lower, up, down, top, bottom, etc. are relative and changeable as the system or data is rotated, flipped over, etc. These terms are useful for describing the device but are not intended to be absolutes.
NVM flash memory 68 may be on a flash module that may have a packaged controller and flash die in a single chip package that can be integrated either onto a PCBA, or directly onto the motherboard to further simplify the assembly, lower the manufacturing cost and reduce the overall thickness. Flash chips could also be used with other embodiments including the open frame cards.
Rather than use smart storage switch 30 only for flash-memory storage, additional features may be added. For example, a music player may include a controller for playing audio from MP3 data stored in the flash memory. An audio jack may be added to the device to allow a user to plug in headphones to listen to the music. A wireless transmitter such as a Bluetooth transmitter may be added to the device to connect to wireless headphones rather than using the audio jack. Infrared transmitters such as for IRDA may also be added. A Bluetooth transceiver to a wireless mouse, PDA, keyboard, printer, digital camera, MP3 player, or other wireless device may also be added. The Bluetooth transceiver could replace the connector as the primary connector. A Bluetooth adapter device could have a connector, an RF (Radio Frequency) transceiver, a baseband controller, an antenna, a flash memory (EEPROM), a voltage regulator, a crystal, an LED (Light Emitting Diode), resistors, capacitors, and inductors. These components may be mounted on the PCB before being enclosed into a plastic or metallic enclosure.
The background of the invention section may contain background information about the problem or environment of the invention rather than describe prior art by others. Thus inclusion of material in the background section is not an admission of prior art by the Applicant.
Any methods or processes described herein are machine-implemented or computer-implemented and are intended to be performed by machine, computer, or other device and are not intended to be performed solely by humans without such machine assistance. Tangible results generated may include reports or other machine-generated displays on display devices such as computer monitors, projection devices, audio-generating devices, and related media devices, and may include hardcopy printouts that are also machine-generated. Computer control of other machines is another tangible result.
Any advantages and benefits described may not apply to all embodiments of the invention. When the word “means” is recited in a claim element, Applicant intends for the claim element to fall under 35 USC Sect. 112, paragraph 6. Often a label of one or more words precedes the word “means”. The word or words preceding the word “means” is a label intended to ease referencing of claim elements and is not intended to convey a structural limitation. Such means-plus-function claims are intended to cover not only the structures described herein for performing the function and their structural equivalents, but also equivalent structures. For example, although a nail and a screw have different structures, they are equivalent structures since they both perform the function of fastening. Claims that do not use the word “means” are not intended to fall under 35 USC Sect. 112, paragraph 6. Signals are typically electronic signals, but may be optical signals such as can be carried over a fiber optic line.
The foregoing description of the embodiments of the invention has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. It is intended that the scope of the invention be limited not by this detailed description, but rather by the claims appended hereto.
This application is a CIP of co-pending U.S. patent application for “Command Queuing Smart Storage Transfer Manager for Striping Data to Raw-NAND Flash Modules”, Ser. No. 12/252,155, filed Oct. 15, 2008. This application is a continuation-in-part (CIP) of “Multi-Level Controller with Smart Storage Transfer Manager for Interleaving Multiple Single-Chip Flash Memory Devices”, U.S. Ser. No. 12/186,471, filed Aug. 5, 2008. This application is a continuation-in-part (CIP) of co-pending U.S. patent application for “Single-Chip Multi-Media Card/Secure Digital controller Reading Power-on Boot Code from Integrated Flash Memory for User Storage”, Ser. No. 12/128,916, filed on May 29, 2008, which is a continuation of U.S. patent application for “Single-Chip Multi-Media Card/Secure Digital controller Reading Power-on Boot Code from Integrated Flash Memory for User Storage”, Ser. No. 11/309,594, filed on Aug. 28, 2006, now issued as U.S. Pat. No. 7,383,362, which is a CIP of U.S. patent application for “Single-Chip USB Controller Reading Power-On Boot Code from Integrated Flash Memory for User Storage”, Ser. No. 10/707,277, filed on Dec. 2, 2003, now issued as U.S. Pat. No. 7,103,684. This application is also a CIP of co-pending U.S. patent application for “Reliability High Endurance Non-Volatile Memory Device with Zone-Based Non-Volatile Memory File System”, Ser. No. 12/101,877, filed Apr. 11, 2008. This application is also a CIP of co-pending U.S. patent application for “Hybrid SSD Using a Combination of SLC and MLC Flash Memory Arrays”, U.S. application Ser. No. 11/926,743, filed Oct. 29, 2007. This application is also a CIP of co-pending U.S. patent application for “Methods and systems of managing memory addresses in a large capacity multi-level cell (MLC) based flash memory device”, U.S. application Ser. No. 12/025,706, filed Feb. 4, 2008. This application is also a CIP of co-pending U.S. patent application for “Portable Electronic Storage Devices with Hardware Security Based on Advanced Encryption Standard”, U.S. application Ser. No. 11/924,448, filed Oct. 25, 2007.
Relation | Number | Date | Country
---|---|---|---
Parent | 12252155 | Oct 2008 | US
Child | 12418550 | | US
Parent | 10707277 | Dec 2003 | US
Child | 12252155 | | US
Parent | 12101877 | Apr 2008 | US
Child | 10707277 | | US
Parent | 11926743 | Oct 2007 | US
Child | 12101877 | | US
Parent | 11924448 | Oct 2007 | US
Child | 11926743 | | US
Parent | 12025706 | Feb 2008 | US
Child | 11924448 | | US
Parent | 12128916 | May 2008 | US
Child | 12025706 | | US
Parent | 11309594 | Aug 2006 | US
Child | 12128916 | | US
Parent | 12186471 | Aug 2008 | US
Child | 11309594 | | US