The invention relates to data storage using non-volatile memory (NVM), and more particularly to high-performance, high-endurance NVM based storage systems.
Traditionally, hard disk drives have been used as data storage in computing devices. With the advance of non-volatile memory (e.g., NAND flash memory), attempts have been made to use non-volatile memory as the data storage instead.
Advantages of using NAND flash memory as data storage over a hard disk drive include faster access speed, lower power consumption, and greater shock resistance owing to the absence of moving mechanical parts.
However, there are shortcomings to using non-volatile memory as data storage. The first problem relates to performance: NAND flash memory can only be accessed (i.e., read and/or programmed (written)) in data chunks (e.g., 512-byte data sectors) instead of bytes. In addition, NAND flash memory must be erased before any new data can be written into it, and erasure operations can only be carried out on whole data blocks (e.g., 128 K-byte, 256 K-byte, etc.). All of the valid data in a data block must be copied to a newly allocated block before any erasure operation, thereby slowing performance. These data programming and erasure characteristics not only make NAND flash memory cumbersome to control (i.e., requiring a complex controller and associated firmware), but also make it difficult to realize the advantage of higher access speed over the hard disk drive (e.g., frequent out-of-sequence updates in a file may result in many repeated data copy/erasure operations).
Another problem with NAND flash memory relates to endurance. Unlike hard disk drives, NAND flash memories have a life span measured by a limited number of erase/program cycles. As a result, one key goal of using NAND flash memories as data storage to replace hard disk drives is to avoid data erasure/programming as much as possible.
It would be desirable, therefore, to have an improved non-volatile memory based storage system that overcomes the shortcomings described herein.
This section is for the purpose of summarizing some aspects of the present invention and to briefly introduce some preferred embodiments. Simplifications or omissions in this section as well as in the abstract and the title herein may be made to avoid obscuring the purpose of the section. Such simplifications or omissions are not intended to limit the scope of the present invention.
High performance and endurance non-volatile memory (NVM) based storage systems are disclosed. According to one aspect of the present invention, a NVM based storage system comprises at least one intelligent NVM device, an internal bus, at least one intelligent NVM device controller, a hub timing controller, a central processing unit, a data dispatcher and a storage protocol interface bridge. The intelligent NVM device includes control interface logic and NVM. The control interface logic is configured to receive commands, logical addresses, data and timing signals from a corresponding one of the at least one intelligent NVM device controller. Logical-to-physical address conversion can be performed within the control interface logic, thereby eliminating the need for address conversion in a storage system level controller (e.g., the NVM based storage system). This feature also enables distributed address mappings instead of the centralized approach of the prior art. The data dispatcher, together with the hub timing controller, is configured for dispatching commands and sending relevant timing clock cycles to each of the at least one NVM device controller via the internal bus to enable interleaved parallel data transfer operations. The storage protocol interface bridge is configured for receiving data transfer commands from a host computer system via an external storage interface. An intelligent NVM device can be implemented as a single chip, which may include, but not be limited to, a product-in-package, a device-on-device package, a device-on-silicon package, or a multi-die package.
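For illustration only, the following C sketch models the distributed address-mapping idea described above: each intelligent NVM device keeps its own logical-to-physical table inside its control interface logic, so the system-level controller never translates addresses itself. The structure names, table size, and identity-seeded mapping are assumptions made for this sketch, not the disclosed firmware.

```c
#include <stdint.h>
#include <stdio.h>

#define BLOCKS_PER_DEVICE 1024  /* illustrative per-device capacity */

/* Per-device mapping table kept inside the control interface logic,
 * so address conversion happens within the device, not system-wide. */
typedef struct {
    uint16_t lba_to_pba[BLOCKS_PER_DEVICE]; /* logical -> physical block */
} control_interface;

static void ci_init(control_interface *ci) {
    for (uint16_t i = 0; i < BLOCKS_PER_DEVICE; i++)
        ci->lba_to_pba[i] = i;  /* start with an identity mapping */
}

/* The device resolves the physical block locally; callers see only LBAs. */
static uint16_t ci_translate(const control_interface *ci, uint16_t lba) {
    return ci->lba_to_pba[lba];
}

int main(void) {
    control_interface dev;
    ci_init(&dev);
    dev.lba_to_pba[7] = 42;  /* e.g., a block remapped after an erase */
    printf("LBA 7 -> PBA %u\n", (unsigned)ci_translate(&dev, 7));
    return 0;
}
```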
According to another aspect of the present invention, a volatile memory buffer, together with a corresponding volatile memory controller and phase-locked loop (PLL) circuit, is also included in a NVM based storage system. The volatile memory buffer is partitioned into two parts: a command queue and one or more page buffers. The command queue is configured to hold data transfer commands received by the storage protocol interface bridge, while the page buffers are configured to hold data in transit between the host computer and the at least one NVM device. The PLL circuit is configured for providing a timing clock to the volatile memory buffer.
According to yet another aspect of the present invention, the volatile memory buffer allows data write commands with overlapped target addresses to be merged in the volatile memory buffer before writing to the at least one NVM device, thereby reducing repeated data programming or writing into the same area of the NVM device. As a result, endurance of the NVM based storage system is increased due to the lower number of data programming operations.
According to yet another aspect, the volatile memory buffer allows preloading of data in anticipation of the data requested in certain data read commands, hence increasing performance of the NVM based storage system.
According to yet another aspect, when a volatile memory buffer is included in a NVM based storage system, the system needs to monitor for unexpected power failure. Upon detecting such a power failure, the commands stored in the command queue along with the data in the page buffers must be stored in a special location using reserved electric energy stored in a designated capacitor. The special location is a reserved area of the NVM device, for example, the last physical block of the NVM device. The command queue is sized such that the limited amount of electric energy stored in the designated capacitor suffices for copying all of the stored data to the reserved area. To further maximize the capacity of the command queue, the emergency data dump is performed without address conversion.
According to still another aspect, after an unexpected power failure, a NVM based storage system can restore its volatile memory buffer by copying the data back from the reserved area of the NVM to the volatile memory buffer.
Other objects, features, and advantages of the present invention will become apparent upon examining the following detailed description of an embodiment thereof, taken in conjunction with the attached drawings.
These and other features, aspects, and advantages of the present invention will be better understood with regard to the following description, appended claims, and accompanying drawings as follows:
In the following description, numerous details are set forth to provide a more thorough explanation of embodiments of the present invention. It will be apparent, however, to one skilled in the art, that embodiments of the present invention may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form, rather than in detail, in order to avoid obscuring embodiments of the present invention.
Reference herein to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. Further, the order of blocks in process flowcharts or diagrams representing one or more embodiments of the invention does not inherently indicate any particular order nor imply any limitations in the invention.
Embodiments of the present invention are discussed herein with reference to
The card body 100 is configured for providing electrical and mechanical connection for the processing unit 102, the memory device 103, the I/O interface circuit 105, and all of the optional components. The card body 100 may comprise a printed circuit board (PCB) or an equivalent substrate such that all of the components as integrated circuits may be mounted thereon. The substrate may be manufactured using surface mount technology (SMT) or chip on board (COB) technology.
The processing unit 102 and the I/O interface circuit 105 are collectively configured to provide various control functions (e.g., data read, write and erase transactions) of the memory device 103. The processing unit 102 may be a standalone microprocessor or microcontroller, for example, an 8051, 8052, or 80286 Intel® microprocessor, or an ARM®, MIPS® or other equivalent digital signal processor. The processing unit 102 and the I/O interface circuit 105 may be made in a single integrated circuit, such as an application-specific integrated circuit (ASIC).
The memory device 103 may comprise one or more non-volatile memory (e.g., flash memory) chips or integrated circuits. The flash memory chips may be single-level cell (SLC) or multi-level cell (MLC) based. In SLC flash memory, each cell holds one bit of information, while a MLC flash memory cell stores more than one bit (e.g., 2, 4 or more bits). A detailed data structure of an exemplary flash memory is described and shown in
The fingerprint sensor 104 is mounted on the card body 100, and is adapted to scan a fingerprint of a user of the first electronic flash memory device 100 to generate fingerprint scan data. Details of the fingerprint sensor 104 are shown and described in a co-inventor's U.S. Pat. No. 7,257,714, entitled “Electronic Data Storage Medium with Fingerprint Verification Capability” issued on Aug. 14, 2007, the entire content of which is incorporated herein by reference.
The memory device 103 stores, in a known manner therein, one or more data files, a reference password, and the fingerprint reference data obtained by scanning a fingerprint of one or more authorized users of the first flash memory device. Only authorized users can access the stored data files. The data file can be a picture file, a text file or any other file. Since the electronic data storage compares fingerprint scan data obtained by scanning a fingerprint of a user of the device with the fingerprint reference data in the memory device to verify if the user is the assigned user, the electronic data storage can only be used by the assigned user so as to reduce the risks involved when the electronic data storage is stolen or misplaced.
The input/output interface circuit 105 is mounted on the card body 100, and can be activated so as to establish communication with the motherboard 109 by way of an appropriate socket via an interface bus 113. The input/output interface circuit 105 may include circuits and control logic associated with a Universal Serial Bus (USB) interface structure that is connectable to an associated socket connected to or mounted on the motherboard 109. The input/output interface circuit 105 may also implement other interfaces including, but not limited to, a Secure Digital (SD) interface circuit, Micro SD interface circuit, Multi-Media Card (MMC) interface circuit, Compact Flash (CF) interface circuit, Memory Stick (MS) interface circuit, PCI-Express interface circuit, an Integrated Drive Electronics (IDE) interface circuit, Serial Advanced Technology Attachment (SATA) interface circuit, external SATA interface circuit, Radio Frequency Identification (RFID) interface circuit, fiber channel interface circuit, or optical connection interface circuit.
The processing unit 102 is controlled by a software program module (e.g., a firmware (FW)), which may be stored partially in a ROM (not shown), such that the processing unit 102 is operable selectively in: (1) a data programming or write mode, where the processing unit 102 activates the input/output interface circuit 105 to receive data from the motherboard 109 and/or the fingerprint reference data from the fingerprint sensor 104 under the control of the motherboard 109, and stores the data and/or the fingerprint reference data in the memory device 103; (2) a data retrieving or read mode, where the processing unit 102 activates the input/output interface circuit 105 to transmit data stored in the memory device 103 to the motherboard 109; or (3) a data resetting or erasing mode, where data in stale data blocks is erased or reset in the memory device 103. In operation, the motherboard 109 sends write and read data transfer requests to the first flash memory device 100 via the interface bus 113 and the input/output interface circuit 105 to the processing unit 102, which in turn utilizes a flash memory controller (not shown, or embedded in the processing unit) to read from or write to the associated at least one memory device 103. In one embodiment, for further security protection, the processing unit 102 automatically initiates an operation of the data resetting mode upon detecting that a predefined time period has elapsed since the last authorized access of the data stored in the memory device 103.
The optional power source 107 is mounted on the card body 100, and is connected to the processing unit 102 and other associated units on card body 100 for supplying electrical power (to all card functions) thereto. The optional function key set 108, which is also mounted on the card body 100, is connected to the processing unit 102, and is operable so as to initiate operation of processing unit 102 in a selected one of the programming, data retrieving and data resetting modes. The function key set 108 may be operable to provide an input password to the processing unit 102. The processing unit 102 compares the input password with the reference password stored in the memory device 103, and initiates authorized operation of the first flash memory device 100 upon verifying that the input password corresponds with the reference password. The optional display unit 106 is mounted on the card body 100, and is connected to and controlled by the processing unit 102 for displaying data exchanged with the motherboard 109.
A second flash memory device (without fingerprint verification capability) is shown in
Another flash memory module 171 is shown in
Referring now to
Each of the at least one intelligent NVM device 237 includes a control interface (CTL IF) 238 and a NVM 239. The control interface 238 is configured for communicating with the corresponding intelligent NVM device controller 231 via NVM interface 235 for logical addresses, commands, data and timing signals. The control interface 238 is also configured for extracting a logical block address (LBA) from each of the received logical addresses such that the corresponding physical block address (PBA) is determined within the intelligent NVM device 237. Furthermore, the control interface 238 is configured for managing wear leveling (WL) of NVM 239 locally with a local WL controller 219. The local WL controller 219 may be implemented in software (i.e., firmware) and/or hardware. Each local WL controller 219 is configured to ensure that usage of the physical non-volatile memory of the respective NVM device 237 is as even as possible. The local WL controller 219 operates on physical block addresses of each respective NVM device. Additionally, the control interface 238 is configured for managing bad block (BB) relocation to make sure each of the physical NVM devices 237 has an even wear level count to maximize usage. Moreover, the control interface 238 is also configured for handling Error Correction Code (ECC) operations on corrupted data bits that occur during NVM read/write operations, hence further ensuring reliability of the NVM devices 237. NVM 239 may include, but is not necessarily limited to, single-level cell flash memory (SLC), multi-level cell flash memory (MLC), phase-change memory (PCM), Magnetoresistive random access memory, Ferroelectric random access memory, and Nano random access memory. For PCM, the local WL controller 219 does not need to manage wear leveling but instead handles other functions such as ECC and bad block relocation.
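As a concrete illustration of local wear leveling, the sketch below allocates the free block with the lowest erase count, one common WL policy. The pool size, counts, and policy are illustrative assumptions; the actual local WL controller 219 may use a different algorithm.

```c
#include <stdint.h>
#include <stdio.h>

#define FREE_POOL 8  /* illustrative free-block pool size */

/* Erase counts tracked per physical block by the local WL controller. */
static uint32_t erase_count[FREE_POOL] = {12, 9, 30, 9, 15, 7, 22, 7};

/* Pick the free block with the fewest erasures so wear stays even. */
static int wl_pick_block(void) {
    int best = 0;
    for (int i = 1; i < FREE_POOL; i++)
        if (erase_count[i] < erase_count[best])
            best = i;
    return best;
}

int main(void) {
    int b = wl_pick_block();
    printf("allocate physical block %d (erase count %u)\n",
           b, (unsigned)erase_count[b]);
    erase_count[b]++;  /* the block will be erased before reprogramming */
    return 0;
}
```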
Each of the at least one intelligent NVM controller 231 includes a controller logic 232 and a channel interface 233. The intelligent NVM controllers 231 are coupled to the internal bus 230 in parallel. The volatile memory buffer 220, also coupled to the internal bus, may comprise synchronous dynamic random access memory (SDRAM). Data transfer between the volatile memory buffer 220 and the non-volatile memory device 237 can be performed by direct memory access (DMA) via the internal bus 230 and the intelligent NVM device controller 231. Volatile memory buffer 220 is controlled by volatile memory buffer controller 222. PLL circuit 228 is configured for generating a timing signal for the volatile memory buffer 220 (e.g., a SDRAM clock). The hub timing controller 224, together with data dispatcher 215, is configured for dispatching commands and sending relevant timing signals to the at least one intelligent NVM device controller 231 to enable parallel data transfer operations. For example, parallel advanced technology attachment (PATA) signals may be sent over the internal bus 230 to different ones of the intelligent NVM device controllers 231. One NVM device controller 231 can process one of the PATA requests while another NVM device controller processes another PATA request; thus, multiple intelligent NVM devices 237 are accessed in parallel.
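The interleaving described above can be pictured with a minimal round-robin dispatch sketch. The channel count and the modulo policy are assumptions made for illustration, not the data dispatcher 215's actual arbitration logic.

```c
#include <stdio.h>

#define NUM_CONTROLLERS 4  /* illustrative channel count */

/* Round-robin dispatch: while controller k is busy programming its NVM,
 * the next command is already clocked out to controller k+1, keeping the
 * internal bus and the NVM channels busy in parallel. */
static int dispatch(int request_id) {
    return request_id % NUM_CONTROLLERS;  /* target channel */
}

int main(void) {
    for (int req = 0; req < 8; req++)
        printf("request %d -> NVM device controller %d\n",
               req, dispatch(req));
    return 0;
}
```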
CPU 226 is configured for controlling overall data transfer operations of the first NVM based storage system 210a. The local memory buffer 227 (e.g., static random access memory) may be configured as data and/or address buffer to enable faster CPU execution. The storage protocol interface bridge 214 is configured for sending and/or receiving commands, addresses and data from a host computer via an external storage interface bus 213 (e.g., interface bus 113 of
Finally, in order to increase security of data stored in the storage system 210a, stored data may be encrypted using a data encryption/decryption engine 223. In one embodiment, the data encryption/decryption engine 223 is implemented based on the Advanced Encryption Standard (AES), for example, 128-bit AES.
Details of synchronous DDR interlock signals 236A are shown in
The chip selection control 241 is configured for generating chip enable signals (e.g., CE0#, CE1#, etc.), each of which enables a particular chip that the DDR channel controller 234 controls. For example, the multiple NVM devices controlled by the DDR channel controller 234 may include a plurality of NVM chips or integrated circuits; the DDR channel controller 234 activates a particular one of them at any one time. The read/write command register 242 is configured for generating a read or write signal to control either a read or a write data transfer operation. The address register 243 comprises a row and column address. The command/address timing generator 244 is configured for generating address latch enable (ALE) and command latch enable (CLE) signals. The clock control circuit 245 is configured to generate a main clock signal (CLK) for the entire DDR channel controller 234. The sector input buffer 251 and the sector output buffer 252 are configured to hold data to be transmitted into and out of the DDR channel controller 234. The DQS generator 254 is configured to generate timing signals such that data input and output are latched at a faster data rate than the main clock cycles. The read FIFO 246 and the write FIFO 247 are buffers configured to operate in conjunction with the sector input/output buffers. The driver 249 and the receiver 250 are configured to send and to receive data, respectively.
Referring now to
Process 285 starts at an ‘IDLE’ state until the data encryption/decryption engine 223 receives plain text data (i.e., unencrypted data) at step 286. Next, at step 287, process 285 groups the received data into 128-bit blocks (i.e., states), with each block containing sixteen bytes, or sixteen (16) 8-bit data values, arranged in a 4×4 matrix (i.e., 4 rows and 4 columns of 8-bit data). Data padding is used to ensure a full 128-bit block. At step 288, a cipher key is generated from a password (e.g., a user entered password).
At step 289, a counter (i.e., ‘Round count’) is set to one. At step 290, process 285 performs an ‘AddRoundKey’ operation, in which each byte of the state is combined with the round key. Each round key is derived from the cipher key using the key schedule (e.g., Rijndael's key schedule). Next, at step 291, process 285 performs a ‘SubBytes’ operation (i.e., a non-linear substitution step), in which each byte is replaced with another according to a lookup table (i.e., the Rijndael S-box). The S-box is derived from the multiplicative inverse over the Galois Field GF(2^8). To avoid attacks based on simple algebraic properties, the S-box is constructed by combining the inverse function with an invertible affine transformation. The S-box is also chosen to avoid any fixed points (and so is a derangement), and also any opposite fixed points.
At step 292, the next operation performed by process 285 is called ‘ShiftRows’. This is a transposition step where each row of the state is shifted cyclically by a certain number of steps. For AES, the first row is left unchanged. Each byte of the second row is shifted one position to the left. Similarly, the third and fourth rows are shifted by offsets of two and three, respectively. For block sizes of 128 bits and 192 bits the shifting pattern is the same. In this way, each column of the output state of the ShiftRows step is composed of bytes from each column of the input state. (Rijndael variants with a larger block size have slightly different offsets.) In the case of the 256-bit block, the first row is unchanged and the shifting for the second, third and fourth rows is 1 byte, 3 bytes and 4 bytes respectively; this change, however, only applies to the Rijndael cipher when used with a 256-bit block, which is not used for AES.
Process 285 then moves to decision 293, where it is determined whether the counter has reached ten (10). If ‘no’, process 285 performs a ‘MixColumns’ operation at step 294. This step is a mixing operation which operates on the columns of the state, combining the four bytes in each column. The four bytes of each column of the state are combined using an invertible linear transformation. The MixColumns function takes four bytes as input and outputs four bytes, where each input byte affects all four output bytes. Together with ShiftRows, MixColumns provides diffusion in the cipher. Each column is treated as a polynomial over GF(2^8) and is then multiplied modulo x^4+1 with a fixed polynomial c(x) = 3x^3 + x^2 + x + 2. The MixColumns step can also be viewed as a multiplication by a particular maximum distance separable (MDS) matrix in Rijndael's finite field. The counter is then incremented by one (1) at step 295 before moving back to step 290 for another round.
When the counter ‘Round count’ is determined to be ten (10) at decision 293, process 285 sends out the encrypted data (i.e., cipher text) before going back to the ‘IDLE’ state to await more data. It is possible to speed up execution of process 285 by combining ‘SubBytes’ and ‘ShiftRows’ with ‘MixColumns’ and transforming them into a sequence of table lookups.
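To make the round sequencing of process 285 concrete, the sketch below mirrors the loop described above for AES-128: ten rounds, with ‘MixColumns’ skipped once the counter reaches ten. Only ‘ShiftRows’ is implemented in full (matching the row rotations described at step 292); the other three transformations are left as labeled stubs, so this models the sequencing only and is not a working cipher.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* AES state: 16 bytes in column-major order, s[row + 4*col]. */

/* Stubs standing in for the real transformations (the S-box lookups,
 * key schedule, and GF(2^8) arithmetic are omitted on purpose). */
static void add_round_key(uint8_t s[16], int round) { (void)s; (void)round; }
static void sub_bytes(uint8_t s[16])                { (void)s; }
static void mix_columns(uint8_t s[16])              { (void)s; }

/* ShiftRows as described above: row r is rotated left by r bytes. */
static void shift_rows(uint8_t s[16]) {
    uint8_t t[16];
    for (int r = 0; r < 4; r++)
        for (int c = 0; c < 4; c++)
            t[r + 4 * c] = s[r + 4 * ((c + r) % 4)];
    memcpy(s, t, 16);
}

int main(void) {
    uint8_t state[16];
    for (int i = 0; i < 16; i++) state[i] = (uint8_t)i;

    /* Round loop of process 285: the counter runs 1..10, and MixColumns
     * is performed only while the counter has not yet reached ten. */
    for (int round = 1; round <= 10; round++) {
        add_round_key(state, round);   /* step 290 */
        sub_bytes(state);              /* step 291 */
        shift_rows(state);             /* step 292 */
        if (round < 10)                /* decision 293 */
            mix_columns(state);        /* step 294 */
    }
    printf("first state byte after the rounds: 0x%02x\n", state[0]);
    return 0;
}
```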
Hub timing controller 316 activates the storage system 300. Data is buffered across storage protocol bridge 321 from the host to the NVM devices 337. Internal bus 325 allows data to flow between storage protocol bridge 321 and SSD downstream interfaces 328. The host and the endpoint may operate at the same speed (e.g., USB low speed (LS), full speed (FS), or high speed (HS)), or at different speeds. Buffers in storage protocol bridge 321 can store the data. Storage packet preprocessor 323 is configured to process the received data packets.
When operating in single-endpoint mode, transaction manager 322 not only buffers data using storage protocol bridge 321, but can also re-order packets for transactions from the host. A transaction may have several packets, such as an initial token packet to start a memory read, a data packet from the memory device back to the host, and a handshake packet to end the transaction. Rather than have all packets for a first transaction complete before the next transaction begins, packets for the next transaction can be re-ordered by the storage system 300 and sent to the memory devices before completion of the first transaction. This allows more time for memory access to occur for the next transaction. Transactions are thus overlapped by re-ordering packets.
Transaction manager 322 may overlap and interleave transactions to different flash storage blocks, allowing for improved data throughput. For example, packets for several incoming transactions from the host are stored in storage protocol bridge 321 or associated buffer (not shown). Transaction manager 322 examines these buffered transactions and packets and re-orders the packets before sending them over internal bus 325 to the NVM devices 337.
A packet to begin a memory read of a flash block through a first downstream interface 328a may be reordered ahead of a packet ending a read of another flash block through a second downstream interface 328b to allow access to begin earlier for the second flash block.
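One possible reordering policy is sketched below: a buffered begin-read packet is hoisted ahead of completion packets belonging to other channels, so the slow flash access starts sooner. The packet model and the hoisting rule are assumptions made for illustration, not transaction manager 322's actual logic.

```c
#include <stdio.h>

/* Illustrative packet model: the phase of a transaction on a channel. */
typedef enum { PKT_BEGIN_READ, PKT_DATA, PKT_HANDSHAKE } pkt_phase;

typedef struct {
    int channel;      /* downstream interface the packet targets */
    pkt_phase phase;
} packet;

/* Hoist begin-read packets ahead of packets for other channels, so a
 * second flash block's access can start before the first completes. */
static void reorder(packet q[], int n) {
    for (int i = 1; i < n; i++) {        /* stable insertion sort */
        packet p = q[i];
        int j = i - 1;
        while (j >= 0 && p.phase == PKT_BEGIN_READ &&
               q[j].phase != PKT_BEGIN_READ && q[j].channel != p.channel) {
            q[j + 1] = q[j];
            j--;
        }
        q[j + 1] = p;
    }
}

int main(void) {
    packet q[] = { {0, PKT_DATA}, {0, PKT_HANDSHAKE}, {1, PKT_BEGIN_READ} };
    reorder(q, 3);  /* begin-read for channel 1 moves to the front */
    for (int i = 0; i < 3; i++)
        printf("packet %d: channel %d phase %d\n", i, q[i].channel,
               (int)q[i].phase);
    return 0;
}
```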
Logical address space (LAS) 500 in a host computer is shown in the left column of
Physical address space (PAS) 540 in a non-volatile memory device is shown in the right column of
Volatile memory buffer 520 is partitioned into two portions: page buffers 521 and a command (CMD) queue 530. The page buffers 521 are configured for holding data to be transmitted between the host and the NVM device, while the command queue 530 is configured to store commands received from the host computer. The size of a page buffer is configured to match the page size of the physical NVM, for example, 2,048 bytes for MLC flash. In addition, each page requires additional bytes for error correction code (ECC). The command queue 530 is configured to hold N commands, where N is a whole number (e.g., a positive integer). The command queue 530 is sized such that the stored commands and associated data can be flushed or dumped to the reserved area 566 using the reserved electric energy stored in a designated capacitor of the NVM based storage system. In a normal data transfer operation, data stored into the NVM device must be mapped from LAS 500 to PAS 540. However, in an emergency situation, such as upon detecting an unexpected power failure, the data transfer (i.e., flushing or dumping data from the volatile memory buffer to the reserved area) is performed without any address mapping or translation, as sketched below. The goal is to capture perishable data from volatile memory into non-volatile memory so that the data can be recovered later.
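The emergency dump might look like the following sketch: the command queue and page buffers are streamed sequentially into the reserved area at fixed physical addresses, with no LBA-to-PBA translation on the way. The sizes, the reserved base address, and the helper name are illustrative assumptions, not the disclosed firmware.

```c
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE   2048  /* matches the MLC page size noted above */
#define NUM_PAGES   4     /* illustrative page buffer depth */
#define CMD_QUEUE_N 8     /* illustrative command queue depth */

/* Perishable state held in the volatile memory buffer. */
static uint8_t page_buf[NUM_PAGES][PAGE_SIZE];
static uint32_t cmd_queue[CMD_QUEUE_N];

/* Stand-in for a raw page program at a fixed physical address; a real
 * controller would drive the NVM channel directly here. */
static void nvm_program_raw(uint32_t phys_page, const void *src, size_t len) {
    printf("program %zu bytes to reserved physical page %u\n",
           len, (unsigned)phys_page);
    (void)src;
}

/* On power failure: stream the command queue and page buffers into the
 * reserved area sequentially. No LBA-to-PBA translation is performed,
 * so the dump finishes within the capacitor's limited energy budget. */
static void emergency_dump(uint32_t reserved_base) {
    uint32_t p = reserved_base;
    nvm_program_raw(p++, cmd_queue, sizeof cmd_queue);
    for (int i = 0; i < NUM_PAGES; i++)
        nvm_program_raw(p++, page_buf[i], PAGE_SIZE);
}

int main(void) {
    emergency_dump(0xFFF0);  /* e.g., the last physical block of the NVM */
    return 0;
}
```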
One advantage of using the volatile memory buffer is to allow data write commands with overlapped target addresses to be merged before writing to the NVM device. Merging write commands can eliminate repeated data programming to the same area of the NVM, thereby increasing endurance of the NVM device.
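A minimal sketch of such write merging follows: an incoming write whose sector range overlaps (or abuts) an already-buffered command widens that command instead of creating a second program of the same area. The queue model and the merge rule are assumptions made for illustration.

```c
#include <stdint.h>
#include <stdio.h>

#define MAX_PENDING 8  /* illustrative queue depth */

/* A pending write covering sectors [lba, lba + sectors). */
typedef struct { uint32_t lba; uint32_t sectors; } write_cmd;

static write_cmd pending[MAX_PENDING];
static int n_pending = 0;

/* Queue a write; if it overlaps or abuts a buffered command, widen that
 * command instead of adding a second program of the same area. */
static void buffer_write(uint32_t lba, uint32_t sectors) {
    for (int i = 0; i < n_pending; i++) {
        write_cmd *w = &pending[i];
        if (lba <= w->lba + w->sectors && w->lba <= lba + sectors) {
            uint32_t end = (lba + sectors > w->lba + w->sectors)
                         ? lba + sectors : w->lba + w->sectors;
            if (lba < w->lba) w->lba = lba;
            w->sectors = end - w->lba;
            return;  /* merged: one program instead of two */
        }
    }
    pending[n_pending++] = (write_cmd){ lba, sectors };
}

int main(void) {
    buffer_write(100, 8);
    buffer_write(104, 8);  /* overlaps -> merged into sectors 100..112 */
    for (int i = 0; i < n_pending; i++)
        printf("pending write: LBA %u, %u sectors\n",
               (unsigned)pending[i].lba, (unsigned)pending[i].sectors);
    return 0;
}
```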
Shown in
Shown in the bottom row of
Referring now to
Process 700 starts at an ‘IDLE’ state and waits until the NVM based storage system 210b receives a data transfer command from a host computer via a storage interface at step 702. Next, at decision 704, it is determined whether the received command is a data write command. If ‘yes’, the storage system 210b extracts a logical address (e.g., LBA) from the received command at step 706. Then, process 700 moves to decision 708, where it is determined whether the logical address is located in the system area. If ‘yes’, system files (e.g., MBR, FAT, initial program loader, etc.) are saved to the NVM device right away at step 710 and process 700 goes back to the ‘IDLE’ state for another command.
If ‘no’, process 700 moves to decision 712, where it is determined whether the data transfer range in the received command is fresh or new to the volatile memory buffer. If ‘no’, existing data at the overlapped addresses in the page buffers is overwritten with the new data at step 714. Otherwise, the data is written into appropriate empty page buffers at step 716. After the data write command has been stored in the command queue with its data stored in the page buffers, an ‘end-of-transfer’ signal is sent back to the host computer at step 718. Process 700 then moves back to the ‘IDLE’ state.
Referring back to decision 704, if ‘no’, process 700 moves to step 722 to extract the logical address from the received data read command. Next, at decision 724, it is determined whether the data transfer range exists in the volatile memory buffer. If ‘no’, process 700 triggers NVM read cycles to retrieve the requested data from the NVM device at step 726. Otherwise, the requested data can be fetched directly from the volatile memory buffer without accessing the NVM device at step 728. Next, at step 730, the requested data is filled into the page buffers before the host computer is notified. Finally, process 700 moves back to the ‘IDLE’ state for another data transfer command. It is noted that the data transfer range is determined by the start address and the number of data sectors to be transferred in each command.
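The dispatch logic of process 700 can be summarized in a short sketch. The system-area boundary, the buffer-lookup stub, and the command structure are illustrative assumptions rather than the actual firmware.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Illustrative command model: a start LBA plus a sector count defines
 * the data transfer range noted above. */
typedef struct { bool is_write; uint32_t lba; uint32_t sectors; } host_cmd;

static bool in_system_area(uint32_t lba)       { return lba < 64; } /* assumed layout */
static bool range_in_buffer(const host_cmd *c) { (void)c; return false; } /* stub lookup */

/* Dispatch mirroring process 700: writes to the system area bypass the
 * buffer; other writes are merged or queued; reads are served from the
 * buffer when the range is resident, otherwise from the NVM device. */
static void handle_command(const host_cmd *c) {
    if (c->is_write) {
        if (in_system_area(c->lba))
            printf("write LBA %u direct to NVM (system files)\n",
                   (unsigned)c->lba);
        else
            printf("write LBA %u into page buffers, queue command\n",
                   (unsigned)c->lba);
    } else {
        if (range_in_buffer(c))
            printf("read LBA %u served from volatile buffer\n",
                   (unsigned)c->lba);
        else
            printf("read LBA %u triggers NVM read cycles\n",
                   (unsigned)c->lba);
    }
}

int main(void) {
    host_cmd w = { true, 2048, 8 }, r = { false, 2048, 8 };
    handle_command(&w);
    handle_command(&r);
    return 0;
}
```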
Referring back to decision 806, if ‘no’, the data range is extracted from the received command at step 818. Next, process 800 moves to decision 820 to determine whether the data range exists in the volatile memory buffer. If ‘no’, process 800 fetches the requested data from the NVM device at step 824; otherwise, the data is fetched from the volatile memory buffer at step 822. Process 800 ends thereafter.
In the first waveform diagram of
The second waveform diagram of
Some portions of the preceding detailed descriptions have been presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the ways used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be borne in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “processing” or “computing” or “calculating” or “determining” or “displaying” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Embodiments of the present invention also relate to an apparatus for performing the operations herein. This apparatus may be specially constructed for the required purposes, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable medium. A machine-readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine-readable (e.g., computer-readable) medium includes a machine (e.g., a computer) readable storage medium (e.g., read only memory (“ROM”), random access memory (“RAM”), magnetic disk storage media, optical storage media, flash memory devices, etc.), etc.
The algorithms and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct more specialized apparatus to perform the required method operations. The required structure for a variety of these systems will appear from the description below. In addition, embodiments of the present invention are not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of embodiments of the invention as described herein.
Although the present invention has been described with reference to specific embodiments thereof, these embodiments are merely illustrative, and not restrictive of, the present invention. Various modifications or changes to the specifically disclosed exemplary embodiments will be suggested to persons skilled in the art. For example, whereas DDR SDRAM has been shown and described for use in the volatile memory buffer, other volatile memories suitable to achieve the same functionality may be used, for example, SDRAM, DDR2, DDR3, DDR4, Dynamic RAM, or Static RAM. Additionally, whereas the external storage interface has been described and shown as PCI-E, other equivalent interfaces may be used, for example, Advanced Technology Attachment (ATA), Serial ATA (SATA), Small Computer System Interface (SCSI), Universal Serial Bus (USB), ExpressCard, fiber channel interface, optical connection interface circuit, etc. Furthermore, whereas the data security feature has been shown and described using 128-bit AES, other equivalent or more secure standards may be used, for example, 256-bit AES. Finally, whereas the NVM device has been shown and described to comprise two or four devices, other numbers of NVM devices may be used, for example, 8, 16, 32 or any higher number that can be managed by embodiments of the present invention. In summary, the scope of the invention should not be restricted to the specific exemplary embodiments disclosed herein, and all modifications that are readily suggested to those of ordinary skill in the art should be included within the spirit and purview of this application and scope of the appended claims.
This application is a continuation-in-part (CIP) of U.S. patent application for “High Integration of Intelligent Non-Volatile Memory Devices”, Ser. No. 12/054,310, filed Mar. 24, 2008, which is a CIP of “High Endurance Non-Volatile Memory Devices”, Ser. No. 12/035,398, filed Feb. 21, 2008, which is a CIP of “High Speed Controller for Phase Change Memory Peripheral Devices”, U.S. application Ser. No. 11/770,642, filed on Jun. 28, 2007, which is a CIP of “Local Bank Write Buffers for Accelerating a Phase Change Memory”, U.S. application Ser. No. 11/748,595, filed May 15, 2007, which is a CIP of “Flash Memory System with a High Speed Flash Controller”, application Ser. No. 10/818,653, filed Apr. 5, 2004, now U.S. Pat. No. 7,243,185. This application is also a CIP of U.S. patent application for “Intelligent Solid-State Non-Volatile Memory Device (NVMD) System with Multi-Level Caching of Multiple Channels”, Ser. No. 12/115,128, filed on May 5, 2008. This application is also a CIP of U.S. patent application for “High Performance Flash Memory Devices”, Ser. No. 12/017,249, filed on Feb. 27, 2008. This application is also a CIP of U.S. patent application for “Method and Systems of Managing Memory Addresses in a Large Capacity Multi-Level Cell (MLC) based Memory Device”, Ser. No. 12/025,706, filed on Feb. 4, 2008, which is a CIP application of “Flash Module with Plane-interleaved Sequential Writes to Restricted-Write Flash Chips”, Ser. No. 11/871,011, filed Oct. 11, 2007. This application is also a CIP of U.S. patent application for “Single-Chip Multi-Media Card/Secure Digital controller Reading Power-on Boot Code from Integrated Flash Memory for User Storage”, Ser. No. 12/128,916, filed on May 29, 2008, which is a continuation of U.S. patent application for the same title, Ser. No. 11/309,594, filed on Aug. 28, 2006, now issued as U.S. Pat. No. 7,383,362 on Jun. 3, 2008, which is a CIP of U.S. patent application for “Single-Chip USB Controller Reading Power-On Boot Code from Integrated Flash Memory for User Storage”, Ser. No. 10/707,277, filed on Dec. 2, 2003, now issued as U.S. Pat. No. 7,103,684. This application is also a CIP of U.S. patent application for “Electronic Data Flash Card with Fingerprint Verification Capability”, Ser. No. 11/458,987, filed Jul. 20, 2006, which is a CIP of U.S. patent application for “Highly Integrated Mass Storage Device with an Intelligent Flash Controller”, Ser. No. 10/761,853, filed Jan. 20, 2004, now abandoned. This application is also a CIP of U.S. patent application for “Flash Memory Devices with Security Features”, Ser. No. 12/099,421, filed on Apr. 8, 2008. This application is also a CIP of U.S. patent application for “Electronic Data Storage Medium with Fingerprint Verification Capability”, Ser. No. 11/624,667, filed on Jan. 18, 2007, which is a divisional of U.S. patent application Ser. No. 09/478,720, filed on Jan. 6, 2000, now U.S. Pat. No. 7,257,714 issued on Aug. 14, 2007. This application may be related to U.S. Pat. No. 7,073,010 for “USB Smart Switch with Packet Re-Ordering for Interleaving among Multiple Flash-Memory Endpoints Aggregated as a Single Virtual USB Endpoint”, issued on Jul. 4, 2006.
Relation | Number | Date | Country
---|---|---|---
Parent | 09478720 | Jan 2000 | US
Child | 11624667 | | US

Relation | Number | Date | Country
---|---|---|---
Parent | 11309594 | Aug 2006 | US
Child | 12128916 | | US

Relation | Number | Date | Country
---|---|---|---
Parent | 12054310 | Mar 2008 | US
Child | 12141879 | | US
Parent | 12035398 | Feb 2008 | US
Child | 12054310 | | US
Parent | 11770642 | Jun 2007 | US
Child | 12035398 | | US
Parent | 11748595 | May 2007 | US
Child | 11770642 | | US
Parent | 10818653 | Apr 2004 | US
Child | 11748595 | | US
Parent | 12115128 | May 2008 | US
Child | 10818653 | | US
Parent | 12017249 | Jan 2008 | US
Child | 12115128 | | US
Parent | 12025706 | Feb 2008 | US
Child | 12017249 | | US
Parent | 11871011 | Oct 2007 | US
Child | 12025706 | | US
Parent | 12128916 | May 2008 | US
Child | 11871011 | | US
Parent | 10707277 | Dec 2003 | US
Child | 11309594 | | US
Parent | 11458987 | Jul 2006 | US
Child | 10707277 | | US
Parent | 09478720 | Jan 2000 | US
Child | 11458987 | | US
Parent | 10761853 | Jan 2004 | US
Child | 09478720 | | US
Parent | 12099421 | Apr 2008 | US
Child | 10761853 | | US
Parent | 11624667 | Jan 2007 | US
Child | 12099421 | | US