This application is co-pending with U.S. patent application Ser. Nos. 11/746,576 and 11/746,582, both of which are entitled “Secure And Scalable Solid State Disk System” and are filed on even date herewith, and both of which are incorporated herein by reference.
The present invention relates generally to memory systems and more specifically to a secure and scalable solid state disk system.
The flash-based solid state disk (SSD) has slowly gained momentum and acceptance, from industrial, defense and corporate applications to server and general user applications. The major driving forces behind the transition are advances in flash technology development and the intrinsic benefits of the flash components. The advantages of a flash-based SSD over a traditional hard disk drive (HDD) are:
1. Lower power consumption.
2. Lighter weight.
3. Lower heat dissipation.
4. No noise.
5. No mechanical parts.
However, the SSD has disadvantages that have been hurdles to replacing the HDD:
1. Higher cost.
2. Lower density.
3. Lower performance.
Further, a conventional SSD tends to manage a group of flash memory devices, on the order of 4, 8, 16, 32 or more components. This presents great design challenges in the following areas:
1. Pin-outs needed to manage many flash device interfaces.
2. Wear-leveling across many flash components.
3. Manufacturability and testability at the SSD system level.
4. Time lag in supporting and taking advantage of new flash technology.
5. Time to market.
6. Realizing cost savings from new flash technology.
A traditional HDD comes without built-in security. If a host system with an HDD is stolen, the contents of the HDD can easily be accessed and misappropriated. Even though software solutions exist to provide whole-disk encryption, they suffer several problems in real-life application:
1. Performance penalty due to software encryption and decryption.
2. Additional driver installation required.
3. Room left for attack if the password authentication utility resides on the HDD.
If the SSD is to become mainstream and transition from a niche product to more general user applications, it has to address the hurdles mentioned above, in addition to adding value such as security, scalability and other features.
A conventional Secure Digital (SD) flash card block diagram is shown in
In a conventional storage system, such as the ones described in U.S. patent application Ser. No. 10/707,871 (20050005044), U.S. patent application Ser. No. 10/709,718 (20050005063), U.S. Pat. Nos. 6,098,119, 6,883,083, 6,877,044, 6,421,760, 6,138,176, 6,134,630 and 6,549,981, and published application No. US 20030120865, a storage controller automatically configures disk drives at system boot-up or at runtime. It performs the basic storage identification and aggregation functionality. The prior art is best at detecting drive insertion and removal during runtime, but it fails to recognize the asynchronous nature of the host system and the storage system during boot-up. Since the storage controller functions as a virtualization controller, it takes time to identify, test and configure the physical drives during host system boot-up. If there is no mechanism to re-synchronize the host system and the storage system, the host system will simply time out and fail to recognize and configure the virtual logical storage. As such, the conventional systems at best serve only as a secondary storage system, instead of a primary storage system. Another weakness of U.S. Pat. No. 6,098,119 is that the system requires each physical drive to have one or more preloaded “parameter settings” during initialization. This poses a limitation on auto-configuration.
Most of the conventional systems do not address storage expandability and scalability either. Even though U.S. patent application Ser. No. 10/707,871 (20050005044) and U.S. patent application Ser. No. 10/709,718 (20050005063) do address a storage virtualization computer system with scalability, their focus is on an “external” storage virtualization controller coupled to a host entity that can be a host computer or a server. They fail to address the virtual storage boot-up problem mentioned above and, given their storage virtualization architecture, still serve at best as secondary storage.
Further, conventional systems fail to address drive security, in the form of password authentication and hardware encryption, which is vital in notebook computer primary drive applications.
As in U.S. Pat. No. 7,003,623 as shown in
Each flash memory 13 has a total of about 15 to 23 signal pins to interface with the controller 25. The SATA host interface 251 requires 4 signal pins to interface with the SATA host controller 21. The SATA to flash memory controller 25 would therefore require a total of at least 124 signal pins to manage 8 flash memory devices 13, or a total of at least 244 signal pins to manage 16 flash memory devices 13.
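By way of illustration only, the following minimal sketch reproduces the pin-count arithmetic above; the simple controller model is an assumption for illustration, and only the per-device and host-interface figures quoted in the text are used.

```c
#include <stdio.h>

/* Pin-count arithmetic for the conventional approach described above:
 * each raw flash device needs roughly 15 to 23 signal pins, and the SATA
 * host interface needs 4. At the 15-pin lower bound this reproduces the
 * 124- and 244-pin totals for 8 and 16 flash devices. */
static int controller_pins(int flash_devices, int pins_per_flash,
                           int host_interface_pins)
{
    return flash_devices * pins_per_flash + host_interface_pins;
}

int main(void)
{
    const int sata_pins = 4;
    printf("8 flash devices:  %d pins\n", controller_pins(8, 15, sata_pins));  /* 124 */
    printf("16 flash devices: %d pins\n", controller_pins(16, 15, sata_pins)); /* 244 */
    return 0;
}
```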
As is seen in
Accordingly, what is desired is a system and method that addresses the above-identified issues. The present invention addresses such a need.
A solid state disk system is disclosed. The system comprises a user token and a first-level secure virtual storage controller coupled to the host system. The system also includes a plurality of second-level secure virtual storage controllers, each having an interface with and being compatible with the first-level secure virtual storage controller, and a plurality of third-level secure virtual storage devices coupled to the plurality of second-level secure virtual storage controllers.
A system and method in accordance with the present invention provides the following advantages.
1. The system and method introduces a secure virtual storage controller architecture.
2. The system and method introduces a scalable SSD system, based on the secure virtual storage controller architecture.
3. The system and method bases the building blocks on the most prevalent and popular flash card/drive to tap into the latest flash component technology in cost, density and performance.
4. The system and method uses the virtual storage processor to aggregate the density and performance.
5. The system and method uses additional layers of virtual storage controllers, if necessary, to expand the density and performance.
6. The system and method uses the crypto-engine in the virtual storage controller, if necessary, to conduct on-the-fly encryption/decryption of the upstream and downstream data traffic between the host and the device.
7. The system and method utilizes a USB token for independent password authentication on the SSD.
8. The system and method allows the secure-and-scalable solid state disk (SNS-SSD) to replace the HDD with a transparent user experience, from boot-up and hibernation to general usage.
A system and method in accordance with the present invention could be utilized in flash based storage, disk storage systems, portable storage devices, corporate storage systems, PCs, servers, wireless storage, and multimedia storage systems.
The present invention relates generally to memory systems and more specifically to a secure and scalable solid state disk system. The following description is presented to enable one of ordinary skill in the art to make and use the invention and is provided in the context of a patent application and its requirements. Various modifications to the preferred embodiments and the generic principles and features described herein will be readily apparent to those skilled in the art. Thus, the present invention is not intended to be limited to the embodiments shown, but is to be accorded the widest scope consistent with the principles and features described herein.
A USB token 35 serves as an independent agent to provide a password authentication utility before the SNS-SSD 31 can be accessed after the host system 30 boots up. The utility can be a software utility residing on the USB token 35 or, preferably, a browser link to the web server on the USB token 35. The browser link is preferable, as it is more universal and requires fewer system resources to work across platforms.
The secure-and-scalable solid state disk (SNS-SSD) system 31 comprises a first-level secure virtual storage controller 32, two second-level secure virtual storage controllers 33, and eight third-level storage devices, SD cards 10.
The first-level secure virtual storage controller 32 comprises a SATA host interface 321, a crypto-engine 323 and multiple SATA device interfaces 322. The host-side storage interface in this case is Serial ATA (SATA). The storage host interface can be any type of IO interface, including SATA, Serial Attached SCSI (SAS), PCI Express, PATA, USB, Bluetooth, UWB or a wireless interface. A more detailed description of the virtual storage controller 32 is shown in secure virtual storage controller 40 in
The second-level virtual storage controller 33 comprises a SATA host interface 331, a crypto-engine 333 and multiple SD device interfaces 332. Instead of interfacing directly with the flash memory, the virtual storage controller 33 interfaces with the third-level storage device, an SD card 10. The SD card 10 can be replaced with any flash-based card or drive, including a CF card, MMC, USB drive or Memory Stick, as long as pin count, cost and performance justify it. In this case, each SD card 10 has six signal pins. A total of only 24 signal pins is required for four SD components with two flash memory components on each SD card, instead of 120 signal pins for eight flash memory components in the conventional approach. This amounts to a great cost saving in controller chip fabrication and better manufacturability and testability.
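The pin-count saving of the SD-card building block can be checked with the same kind of arithmetic; the figures below are those quoted above (6 signal pins per SD card, 15 per raw flash device at the lower bound), and the comparison itself is illustrative only.

```c
#include <stdio.h>

/* Device-interface pin comparison for eight flash components, using the
 * figures quoted above: four SD cards (two flash components each) at six
 * signal pins per card, versus eight raw flash devices at fifteen pins each. */
int main(void)
{
    const int sd_cards = 4, pins_per_sd = 6;
    const int flash_devices = 8, pins_per_flash = 15;

    int sd_pins = sd_cards * pins_per_sd;            /* 24 */
    int raw_pins = flash_devices * pins_per_flash;   /* 120 */

    printf("SD-card approach:   %d device-interface pins\n", sd_pins);
    printf("raw-flash approach: %d device-interface pins\n", raw_pins);
    printf("saving factor:      %.1fx\n", (double)raw_pins / sd_pins);
    return 0;
}
```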
Even though the first-level secure virtual storage controller 32 and the second-level secure virtual storage controller 33 may have different types of device interfaces, their architectures are substantially identical. As long as the storage device interface 322 is compatible with the storage host interface 331, the first-level secure virtual storage controller 32 can be cascaded and expanded with the second-level secure virtual storage controllers 33. The expansion is therefore exponential in density and performance. In the simplest form of the secure-and-scalable solid state disk (SNS-SSD) system architecture, the host system 30 can interface directly with one of the second-level virtual storage controllers 33. The minimal secure-and-scalable solid state disk (SNS-SSD) system therefore has a total of two levels, comprising the second-level storage controller 33 and the third-level storage devices 10.
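As an illustrative sketch of the cascading idea (not the patented implementation), the following shows how the number of leaf flash cards, and hence aggregate capacity, grows geometrically with the number of compatible controller levels; the fan-out and per-card capacity values are assumed example figures.

```c
#include <stdio.h>

/* If each virtual storage controller level fans out to `fanout` compatible
 * downstream interfaces, the number of leaf flash cards grows geometrically
 * with the number of cascaded controller levels, and capacity scales with it. */
static long leaf_devices(int levels, int fanout)
{
    long n = 1;
    for (int i = 0; i < levels; i++)
        n *= fanout;
    return n;
}

int main(void)
{
    int fanout = 4;              /* e.g., four SD device interfaces per controller */
    long card_capacity_gb = 8;   /* assumed per-card capacity */

    for (int levels = 1; levels <= 3; levels++) {
        long cards = leaf_devices(levels, fanout);
        printf("%d controller level(s): %ld cards, %ld GB aggregate\n",
               levels, cards, cards * card_capacity_gb);
    }
    return 0;
}
```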
The crypto-engine 323 in the first-level and crypto-engine 333 in the second-level can be enabled, disabled and configured independently, depending on the requirement. In most cases, only the top-level crypto-engine is required. All other crypto-engines in the subsequent levels are disabled. A more detailed description of the crypto-engine is shown in
On the host storage interface side, a SATA host interface 331 is used to interface with the first-level virtual storage controller 32. The storage interface in this case is Serial ATA (SATA). A more detailed description of the virtual storage controller 33 is shown in secure virtual storage controller 40 in
As shown in
The virtual storage controller architecture in the invention is cascadable and scalable as long as the storage interface is compatible. If more density is required, more second-level virtual storage controllers can be added for expansion, and accordingly more third-level storage devices can be added for density expansion. Compared with the conventional approach, the secure-and-scalable solid state disk (SNS-SSD) system offers storage density expansion of exponential order.
By using a standard flash card such as the SD card 10 as the flash memory building block, the system brings along several benefits compared with the conventional SSD approach:
1. Wear-leveling of the flash memory is delegated locally to the SD card 10. No large-scale wear-leveling across all flash components is required.
2. Manufacturability and testability are handled at the storage device level, on the SD card. This is more manageable at the device level than at the SSD system level.
3. There is no time lag in supporting and taking advantage of new flash technology, as the design and development is delegated to the standard SD controller 12 inside the SD card 10.
4. Time to market is much shorter. As soon as an SD card 10 with suitable cost, density and performance is available, the secure-and-scalable solid state disk (SNS-SSD) system 31 can be deployed.
5. Cost savings from new flash technology are again brought along by the building-block architecture of the SD card 10.
6. The performance benefit comes from the virtual storage processors in the controllers 32 and 33. They not only provide virtual storage density aggregation, but also provide on-demand performance aggregation. The theoretical performance can be as high as the number of SD cards times the native SD card performance in parallel operation, as illustrated in the sketch following this list.
7. The security is handled by the hardware-based crypto-engine 323 or 333. The password authentication utility resides independently on a USB token 35. The secure-and-scalable solid state disk (SNS-SSD) system thus has better performance and is more secure.
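The performance-aggregation point in item 6 can be pictured with a simple address-striping sketch. This is an assumed RAID-0-like mapping for illustration only; the actual mapping used by the virtual storage processor is not specified here.

```c
#include <stdio.h>

/* A logical block address presented by the host is mapped onto one of N
 * downstream SD cards in a stripe, so sequential transfers can proceed on
 * several cards in parallel and aggregate bandwidth approaches N times the
 * native bandwidth of a single card. */
struct stripe_target {
    int card;        /* which downstream SD card */
    long local_lba;  /* block address within that card */
};

static struct stripe_target map_lba(long host_lba, int num_cards)
{
    struct stripe_target t;
    t.card = (int)(host_lba % num_cards);
    t.local_lba = host_lba / num_cards;
    return t;
}

int main(void)
{
    int num_cards = 4;   /* assumed fan-out */
    for (long lba = 0; lba < 8; lba++) {
        struct stripe_target t = map_lba(lba, num_cards);
        printf("host LBA %ld -> card %d, local LBA %ld\n",
               lba, t.card, t.local_lba);
    }
    return 0;
}
```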
The storage host interface 41 is for interfacing with the upstream host system 30 or another upper-level of secure virtual storage controller. The storage device interface 408 is for interfacing with the downstream storage device 10 or another lower-level of secure virtual storage controller.
Another embodiment of the block diagram of the invention, secure-and-scalable solid state disk (SNS-SSD) system 39 with PATA interface, is shown in
The secure-and-scalable solid state disk (SNS-SSD) system 39 with a PATA interface comprises a first-level secure virtual storage controller 38, a second-level secure virtual storage controller 32, two third-level secure virtual storage controllers 33, and eight fourth-level storage devices, SD cards 10. As described above, the architecture of the invention is expandable and cascadable in density and performance.
As in
The DATA write processor 401 interfaces with the virtual storage processor 407 through the crypto-engine, which performs hardware encryption on the fly. The data is transferred from the buffer, encrypted, and passed to the virtual storage processor 407.
The DATA read processor 402 interfaces with the virtual storage processor 407 through the crypto-engine, which performs hardware decryption on the fly. The data is transferred from the virtual storage processor 407, decrypted, and passed to the buffer.
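The write- and read-path data flow can be sketched in software as follows. This is purely a model of the flow (buffer, crypto-engine, virtual storage processor); the XOR keystream is a placeholder standing in for the hardware crypto-engine, whose actual cipher and key handling are not reproduced here.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Placeholder for the hardware crypto-engine: a reversible XOR keystream,
 * NOT real encryption. Applying it twice restores the original data. */
static void crypto_engine_xform(uint8_t *buf, size_t len,
                                const uint8_t *key, size_t keylen)
{
    for (size_t i = 0; i < len; i++)
        buf[i] ^= key[i % keylen];
}

/* Stand-in for handing an encrypted sector to the virtual storage processor. */
static void virtual_storage_processor_write(const uint8_t *sector, size_t len)
{
    printf("VSP received %zu encrypted bytes\n", len);
    (void)sector;
}

int main(void)
{
    uint8_t key[] = { 0x5a, 0xc3, 0x91, 0x7e };   /* assumed demo key */
    uint8_t sector[512];
    memset(sector, 0xAA, sizeof sector);          /* data from the host buffer */

    crypto_engine_xform(sector, sizeof sector, key, sizeof key);  /* write path: encrypt */
    virtual_storage_processor_write(sector, sizeof sector);

    crypto_engine_xform(sector, sizeof sector, key, sizeof key);  /* read path: decrypt */
    return 0;
}
```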
The pass-through command processor 403 handles those commands that do not require any local processing. The pass-through command is sent directly downstream without encryption or translation.
The get status and attribute processor 404 returns the proper status and/or attributes back to the upstream host system or the upper-level virtual storage controller. If the status or attribute requires too much time for the local controller to return, it will normally assert a busy status to the requesting upstream host system or upper-level virtual storage controller. When the proper status or attribute is collected, the interrupt processor 42 and routine 70 are invoked. The interrupt processor 42 generates a soft reset 47 to the CPU 44 to warm-boot the secure virtual storage controller 40. Consequently, it interrupts the upstream system for service to interrogate the secure virtual storage controller 40 again, and the correct status or attribute is returned. This is a mechanism to synchronize the host and the device when they are running at different paces and the device needs more time to settle after a request.
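The re-synchronization idea can be modeled with a simple busy/retry loop. The states and timing below are assumptions for illustration only; they are not the controller's actual state machine.

```c
#include <stdio.h>

/* The controller answers BUSY while it is still collecting status, then
 * (after its warm boot) the host interrogates it again and receives the
 * correct status. */
enum dev_status { DEV_BUSY, DEV_READY };

struct device {
    int settle_polls;    /* polls remaining until the status is available */
};

static enum dev_status query_status(struct device *d)
{
    if (d->settle_polls > 0) {
        d->settle_polls--;
        return DEV_BUSY;  /* controller asserts busy to the upstream host */
    }
    return DEV_READY;     /* warm boot done; correct status can be returned */
}

int main(void)
{
    struct device dev = { .settle_polls = 3 };
    int attempt = 0;
    while (query_status(&dev) == DEV_BUSY)
        printf("attempt %d: device busy, host retries\n", ++attempt);
    printf("attempt %d: device ready, status returned\n", attempt + 1);
    return 0;
}
```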
Every secure virtual storage controller 40 can be identified with a unique ID preprogrammed in the program memory 45.
This concludes the initialization of the secure virtual storage controller 40.
The host command and data processor 43 queues up and buffers packets of commands and data between the storage host interface 41 and the crypto-engine 406. The extracted command queue is turned over to the host command processor routine 80 to process, in
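A fixed-depth ring buffer is one way such command queueing could look; the queue depth, packet layout and example opcode below are assumptions for illustration, not details taken from the description.

```c
#include <stdbool.h>
#include <stdio.h>

/* Minimal command-packet ring buffer between the host interface and the
 * rest of the controller. */
#define QUEUE_DEPTH 8

struct cmd_packet { int opcode; long lba; int count; };

struct cmd_queue {
    struct cmd_packet slots[QUEUE_DEPTH];
    int head, tail, used;
};

static bool enqueue(struct cmd_queue *q, struct cmd_packet p)
{
    if (q->used == QUEUE_DEPTH) return false;   /* backpressure to the host */
    q->slots[q->tail] = p;
    q->tail = (q->tail + 1) % QUEUE_DEPTH;
    q->used++;
    return true;
}

static bool dequeue(struct cmd_queue *q, struct cmd_packet *out)
{
    if (q->used == 0) return false;
    *out = q->slots[q->head];
    q->head = (q->head + 1) % QUEUE_DEPTH;
    q->used--;
    return true;
}

int main(void)
{
    struct cmd_queue q = {0};
    enqueue(&q, (struct cmd_packet){ .opcode = 0x35, .lba = 1024, .count = 8 });
    struct cmd_packet p;
    while (dequeue(&q, &p))
        printf("hand opcode 0x%02x (LBA %ld, %d blocks) to the command processor\n",
               (unsigned)p.opcode, p.lba, p.count);
    return 0;
}
```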
The local command processor 405 deals with the local functions of crypto-engine 406, virtual storage processor 407 and the local virtual storage controller 40. As shown in
The user provision command 91 is for use by the utility in field applications, including the password authentication utility in the USB token 35. It includes password utility commands 94 and storage partition commands 95. The factory provision command 93 is for use in the factory to configure the SSD. It includes virtual storage processor configuration 96, crypto-engine configuration 97, password attribute configuration 98, and the test-mode command 99. The get local status command 92 returns the corresponding status of the virtual storage controller.
The get virtual storage controller ID command 961 returns the unique ID stored in the program memory 45. The set virtual storage mode command 962 sets the storage operation mode to JBOD (Just a Bunch of Disks), RAID (Redundant Arrays of Independent Disks) or others, depending on the requirements of performance or power consumption. The set crypto-mode command 971 sets the encryption mode of the engine. The enable crypto-engine command 972 enables the crypto-engine. The set Managed Mode flag 983 allows or disallows provisioning of the SSD in the field. If the flag is set to Unmanaged Mode, then only the USB token is needed to re-provision and initialize the SSD. If the flag is set to Managed Mode, then the user has to connect back to the managing server to re-provision and initialize the SSD. The flag can only be set in the factory. The test-mode command 99 is reserved for testing of the SSD by the manufacturer.
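A dispatch sketch for the local command processor is shown below. The numeric values simply reuse the reference numerals from the description as illustrative opcodes; the actual command encodings are not specified here.

```c
#include <stdio.h>

/* Illustrative local command dispatch; numerals are those of the description. */
enum local_cmd {
    CMD_USER_PROVISION     = 91,
    CMD_GET_LOCAL_STATUS   = 92,
    CMD_FACTORY_PROVISION  = 93,
    CMD_PASSWORD_UTILITY   = 94,
    CMD_STORAGE_PARTITION  = 95,
    CMD_VSP_CONFIG         = 96,
    CMD_CRYPTO_CONFIG      = 97,
    CMD_PASSWORD_ATTRIBUTE = 98,
    CMD_TEST_MODE          = 99
};

static void local_command_processor(enum local_cmd cmd)
{
    switch (cmd) {
    case CMD_USER_PROVISION:
        printf("field provisioning (password utility / partition)\n");
        break;
    case CMD_GET_LOCAL_STATUS:
        printf("return local virtual storage controller status\n");
        break;
    case CMD_FACTORY_PROVISION:
        printf("factory configuration of VSP, crypto-engine, password attributes\n");
        break;
    case CMD_TEST_MODE:
        printf("manufacturer test mode\n");
        break;
    default:
        printf("sub-command %d handled under its parent category\n", (int)cmd);
        break;
    }
}

int main(void)
{
    local_command_processor(CMD_USER_PROVISION);
    local_command_processor(CMD_GET_LOCAL_STATUS);
    local_command_processor(CMD_TEST_MODE);
    return 0;
}
```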
Before the SSD is ready for use, it has to go through factory provisioning during the manufacturing process. The provisioning is done by connecting the secure-and-scalable solid state disk (SNS-SSD) system 31 to a host system 30 with a proper SATA host controller 34 and possibly with a USB token 35, as shown in
The host system 30 depends on the plugged-in USB token 35 to conduct password authentication. Referring to
Referring to
Referring to
Referring to
If an init and partition request is generated by the user through command 947, via step 154, the crypto-engine will be re-keyed with a new random key from the random number generator 134 (not shown). It is checked whether the Managed Mode flag is on, via step 1541. If not, the encrypted key is retrieved from the USB token 35, via step 1543. Otherwise, the encrypted key is retrieved from the managing server, via step 1542. The encrypted key is sent to the crypto-engine through the set encrypted key command 9471, via step 1544. The crypto-engine then decrypts and retrieves the key (not shown). The encrypted master password is retrieved and decrypted by the crypto-engine (not shown). A new random key is then generated by the random number generator RNG 134 (not shown). The master password is encrypted with the new key by the crypto-engine (not shown). The utility then initiates a get encrypted new key command 9472, via step 1545. The encrypted new key is stored in the managing server or on the USB token 35, if necessary, via steps 1546 and 1547. The new user password is then requested from the user and configured, via step 1548. Both the master and user passwords are hashed with the newly generated key through HASH function 131 and stored on the SSD (not shown). The SSD partition is then configured, via step 1549.
If the request is not for init and partition, it is checked whether an authenticate password request has been generated, via step 155. If so, password authentication starts, via step 1550. Otherwise, it is checked whether a change password request has been generated, via step 156. If so, the change password utility starts, via step 157. Otherwise, the process loops back to check for a new password utility request, via step 154.
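The init-and-partition branch above (steps 1541-1549) can be summarized in the following sketch. Every cryptographic operation here is a stub: the real work belongs to the hardware crypto-engine, RNG 134 and HASH function 131, none of which are reproduced, and the key and password values are invented for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

static uint32_t rng_new_key(void) { return (uint32_t)rand(); }   /* stand-in for RNG 134 */

static uint32_t fetch_encrypted_key(bool managed_mode)
{
    /* steps 1542/1543: the wrapped key comes from the managing server in
     * Managed Mode, otherwise from the USB token */
    printf("retrieve encrypted key from %s\n",
           managed_mode ? "managing server" : "USB token");
    return 0x1234ABCDu;                      /* placeholder ciphertext */
}

static uint32_t crypto_engine_unwrap(uint32_t wrapped)
{
    return wrapped ^ 0xA5A5A5A5u;            /* placeholder, not real crypto */
}

static uint32_t hash_with_key(const char *pw, uint32_t key)
{
    uint32_t h = key;                        /* placeholder for HASH function 131 */
    for (; *pw; pw++)
        h = h * 31u + (uint8_t)*pw;
    return h;
}

int main(void)
{
    bool managed_mode = false;                                           /* step 1541 */

    uint32_t old_key = crypto_engine_unwrap(fetch_encrypted_key(managed_mode)); /* 1542-1544 */
    (void)old_key;                           /* would be used to recover the master password */

    uint32_t new_key = rng_new_key();        /* new random key */
    printf("store encrypted new key on %s\n",                            /* steps 1545-1547 */
           managed_mode ? "managing server" : "USB token");

    uint32_t master_hash = hash_with_key("master-password", new_key);    /* invented values */
    uint32_t user_hash   = hash_with_key("user-password", new_key);      /* step 1548 */
    printf("store password hashes on SSD: %08x %08x\n", master_hash, user_hash);
    printf("configure SSD partition\n");                                 /* step 1549 */
    return 0;
}
```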
Although the secure and scalable solid state disk system in accordance with the present invention will function with any of a secure digital (SD) card, multimedia card (MMC), compact flash (CF) card, universal serial bus (USB) device, memory stick (MS), ExpressCard, LBA-NAND, ONFI, eMMC, and eSD; one of ordinary skill in the art readily recognizes that the disk system would function with other similar memory devices and still be within the spirit and scope of the present invention.
Although the present invention has been described in accordance with the embodiments shown, one of ordinary skill in the art will readily recognize that there could be variations to the embodiments and those variations would be within the spirit and scope of the present invention. Accordingly, many modifications may be made by one of ordinary skill in the art without departing from the spirit and scope of the appended claims.
Number | Name | Date | Kind |
---|---|---|---|
5012514 | Renton | Apr 1991 | A |
5175766 | Hamilton | Dec 1992 | A |
5394532 | Belsan | Feb 1995 | A |
5442704 | Holtey | Aug 1995 | A |
5469564 | Junya | Nov 1995 | A |
5530845 | Hiatt et al. | Jun 1996 | A |
5758050 | Brady et al. | May 1998 | A |
5768373 | Lohstroh et al. | Jun 1998 | A |
5937066 | Gennaro et al. | Aug 1999 | A |
5999711 | Misra et al. | Dec 1999 | A |
6098119 | Surugucchi et al. | Aug 2000 | A |
6134630 | McDonald et al. | Oct 2000 | A |
6138176 | McDonald et al. | Oct 2000 | A |
6148387 | Galasso et al. | Nov 2000 | A |
6226732 | Pei et al. | May 2001 | B1 |
6311269 | Luckenbaugh et al. | Oct 2001 | B2 |
6324537 | Moran | Nov 2001 | B1 |
6408074 | Longhran | Jun 2002 | B1 |
6421760 | McDonald et al. | Jul 2002 | B1 |
6530078 | Shmid et al. | Mar 2003 | B1 |
6549981 | McDonald et al. | Apr 2003 | B2 |
6567889 | DeKoning et al. | May 2003 | B1 |
6751318 | Crandall | Jun 2004 | B2 |
6868160 | Raji | Mar 2005 | B1 |
6877044 | Lo et al. | Apr 2005 | B2 |
6880054 | Cheng et al. | Apr 2005 | B2 |
6883083 | Kemkar | Apr 2005 | B1 |
7003623 | Teng | Feb 2006 | B2 |
7039759 | Cheng et al. | May 2006 | B2 |
7043684 | Joly | May 2006 | B2 |
7047416 | Wheeler et al. | May 2006 | B2 |
7073010 | Chen et al. | Jul 2006 | B2 |
7089585 | Dharmarajan | Aug 2006 | B1 |
7096354 | Wheeler et al. | Aug 2006 | B2 |
7110982 | Feldman et al. | Sep 2006 | B2 |
7124203 | Joshi et al. | Oct 2006 | B2 |
7127606 | Wheeler et al. | Oct 2006 | B2 |
7133845 | Ginter et al. | Nov 2006 | B1 |
7200747 | Riedel et al. | Apr 2007 | B2 |
7269004 | Ni et al. | Sep 2007 | B1 |
7344072 | Gonzalez et al. | Mar 2008 | B2 |
7406617 | Athreya et al. | Jul 2008 | B1 |
7438234 | Bonalle et al. | Oct 2008 | B2 |
7454531 | Shih | Nov 2008 | B2 |
7506819 | Beenau et al. | Mar 2009 | B2 |
7591018 | Lee | Sep 2009 | B1 |
7664903 | Belonoznik | Feb 2010 | B2 |
7774525 | Farhan et al. | Aug 2010 | B2 |
20020080958 | Ober et al. | Jun 2002 | A1 |
20020103943 | Lo et al. | Aug 2002 | A1 |
20020108023 | Constable et al. | Aug 2002 | A1 |
20030014653 | Moller et al. | Jan 2003 | A1 |
20030018862 | Karnstedt et al. | Jan 2003 | A1 |
20030110351 | Blood et al. | Jun 2003 | A1 |
20030120865 | McDonald et al. | Jun 2003 | A1 |
20040093495 | Engel | May 2004 | A1 |
20040103288 | Ziv et al. | May 2004 | A1 |
20040123127 | Teicher et al. | Jun 2004 | A1 |
20040128468 | Lloyd-Jones | Jul 2004 | A1 |
20050005044 | Liu et al. | Jan 2005 | A1 |
20050005063 | Liu et al. | Jan 2005 | A1 |
20050005131 | Yoshida et al. | Jan 2005 | A1 |
20050033956 | Krempl | Feb 2005 | A1 |
20050097348 | Jakubowski et al. | May 2005 | A1 |
20050109841 | Ryan et al. | May 2005 | A1 |
20050114679 | Bagga et al. | May 2005 | A1 |
20050185463 | Kanamori et al. | Aug 2005 | A1 |
20050193162 | Chou et al. | Sep 2005 | A1 |
20050195975 | Kawakita | Sep 2005 | A1 |
20050220305 | Fujimoto et al. | Oct 2005 | A1 |
20050250473 | Brown et al. | Nov 2005 | A1 |
20050281088 | Ishidoshiro et al. | Dec 2005 | A1 |
20060015946 | Yagawa | Jan 2006 | A1 |
20060047794 | Jezierski | Mar 2006 | A1 |
20060072743 | Naslund et al. | Apr 2006 | A1 |
20060075485 | Funahashi et al. | Apr 2006 | A1 |
20060075488 | Barrett et al. | Apr 2006 | A1 |
20060101205 | Bruning et al. | May 2006 | A1 |
20060104441 | Johansson et al. | May 2006 | A1 |
20060131431 | Finn | Jun 2006 | A1 |
20060143422 | Mashima et al. | Jun 2006 | A1 |
20060208066 | Finn et al. | Sep 2006 | A1 |
20060219776 | Finn | Oct 2006 | A1 |
20060277411 | Reynolds et al. | Dec 2006 | A1 |
20070136606 | Mizuno | Jun 2007 | A1 |
20070214369 | Roberts et al. | Sep 2007 | A1 |
20080059730 | Cepulis | Mar 2008 | A1 |
20080109607 | Astigarraga et al. | May 2008 | A1 |
20080155276 | Chen | Jun 2008 | A1 |
20080279382 | Chen et al. | Nov 2008 | A1 |
20080282264 | Chen et al. | Nov 2008 | A1 |
Number | Date | Country |
---|---|---|
0161692 | Aug 2001 | WO |
2009110878 | Sep 2009 | WO |
Entry |
---|
Notification of Transmittal of the International Search Report and the Written Opinion of the International Searching Authority, or the Declaration, for International Application No. PCT/US08/58532, Int'l filing date Mar. 28, 2008, mailing date Aug. 29, 2008, 11 pages. |
Ilya Krutov, “Choosing eXFlash Storage on IBM eX5 Servers,” Dec. 9, 2011, IBM Redpaper, p. 1-32.
Number | Date | Country | |
---|---|---|---|
20080282027 A1 | Nov 2008 | US |