Data storage device startup

Information

  • Patent Grant
  • Patent Number
    9,323,467
  • Date Filed
    Friday, December 13, 2013
  • Date Issued
    Tuesday, April 26, 2016
Abstract
When a read command is received from a host requesting data stored on a disk of a Data Storage Device (DSD), it is determined whether the DSD is in a startup period and whether the requested data is stored in a solid state memory of the DSD. The requested data is designated for storage in the solid state memory if it is determined that the DSD is in the startup period and the requested data is not stored in the solid state memory.
Description
BACKGROUND

Data Storage Devices (DSDs) are often used to record data onto or to reproduce data from a storage medium. Some DSDs include multiple types of storage media. In the case of a Solid State Hybrid Drive (SSHD), a solid state storage medium such as a flash memory is used for storing data in addition to at least one rotating magnetic disk.


During startup of a computer system including a host and a DSD, the host typically accesses boot up data from the DSD, such as certain Operating System (OS) data and BIOS data. This boot up data is often stored on a disk of the DSD, which requires spinning the disk up to an operating speed before the boot up data can be read. In addition, spinning up the disk requires additional power during the startup period, which can be undesirable for a computer system relying on a battery power source.





BRIEF DESCRIPTION OF THE DRAWINGS

The features and advantages of the embodiments of the present disclosure will become more apparent from the detailed description set forth below when taken in conjunction with the drawings. The drawings and the associated descriptions are provided to illustrate embodiments of the disclosure and not to limit the scope of what is claimed.



FIG. 1 is a block diagram depicting a Data Storage Device (DSD) according to an embodiment.



FIG. 2 is a conceptual diagram illustrating a self learning list according to an embodiment.



FIG. 3 is a flowchart for a self learning process according to an embodiment.



FIG. 4 is a flowchart for a data eviction process according to an embodiment.



FIG. 5 is a flowchart for a media synchronization process according to an embodiment.





DETAILED DESCRIPTION

In the following detailed description, numerous specific details are set forth to provide a full understanding of the present disclosure. It will be apparent, however, to one of ordinary skill in the art that the various embodiments disclosed may be practiced without some of these specific details. In other instances, well-known structures and techniques have not been shown in detail to avoid unnecessarily obscuring the various embodiments.


Data Storage Device Overview


FIG. 1 shows computer system 100 according to an embodiment which includes host 101 and Data Storage Device (DSD) 106. Computer system 100 can be, for example, a desktop, mobile/laptop, tablet, or smartphone, or another electronic device such as a digital video recorder (DVR). In this regard, computer system 100 may be a stand-alone system or part of a network.


In the example of FIG. 1, DSD 106 is a hybrid drive including two types of Non-Volatile Memory (NVM) media, i.e., rotating magnetic disks in disk pack 134 and solid state memory 128. While the description herein refers to solid state memory generally, it is understood that solid state memory may comprise one or more of various types of memory devices such as flash integrated circuits, Chalcogenide RAM (C-RAM), Phase Change Memory (PC-RAM or PRAM), Programmable Metallization Cell RAM (PMC-RAM or PMCm), Ovonic Unified Memory (OUM), Resistance RAM (RRAM), NAND memory (e.g., Single-Level Cell (SLC) memory, Multi-Level Cell (MLC) memory, or any combination thereof), NOR memory, EEPROM, Ferroelectric Memory (FeRAM), Magnetoresistive RAM (MRAM), other discrete NVM chips, or any combination thereof.


DSD 106 includes controller 120 which includes circuitry such as one or more processors for executing instructions and can include a microcontroller, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA), hard-wired logic, analog circuitry and/or a combination thereof. In one implementation, controller 120 can include a System on a Chip (SoC).


Host interface 126 is configured to interface DSD 106 with host 101 and may interface according to a standard such as, for example, PCI express (PCIe), Serial Advanced Technology Attachment (SATA), or Serial Attached SCSI (SAS). As will be appreciated by those of ordinary skill in the art, host interface 126 can be included as part of controller 120.


In the example of FIG. 1, disk pack 134 is rotated by Spindle Motor (SM) 138. DSD 106 also includes Head Stack Assembly (HSA) 136 connected to the distal end of actuator 130 which is rotated by Voice Coil Motor (VCM) 132 to position HSA 136 in relation to disk pack 134. Servo controller 122 includes circuitry to control the position of HSA 136 and the rotation of disk pack 134 using VCM control signal 30 and SM control signal 34, respectively.


Disk pack 134 comprises multiple disks that are radially aligned so as to rotate about SM 138. Each disk in disk pack 134 includes a number of radially spaced, concentric tracks for storing data on a disk surface. HSA 136 includes multiple heads each arranged to read data from and write data to a corresponding surface of a disk in disk pack 134. Read/write channel 124 includes circuitry for encoding data to be written to disk pack 134 and for decoding data read from disk pack 134. As will be appreciated by those of ordinary skill in the art, read/write channel 124 can be included as part of controller 120.


DSD 106 also includes solid state memory 128 for storing data. Solid state memory 128 stores Non-Volatile Cache (NVC) 18 where data can be retained across power cycles (i.e., after turning DSD 106 off and on). NVC 18 can be used to store data which may or may not also be stored in disk pack 134. Solid state memory 128 also stores self learning list 20. As discussed in more detail below with reference to FIG. 2, self learning list 20 can be used as part of a self learning process of DSD 106 to track data that is used across a plurality of startups of DSD 106.


Volatile memory 140 can include, for example, a Dynamic Random Access Memory (DRAM) which can be used by DSD 106 to temporarily store data. Data stored in volatile memory 140 can include data read from NVM (e.g., disk pack 134 or solid state memory 128), data to be written to NVM, instructions loaded from a firmware of DSD 106 for execution by controller 120, and/or data used in executing the firmware of DSD 106.


In operation, DSD 106 receives read and write commands from host 101 via host interface 126 for reading data from and writing data to NVM such as solid state memory 128 and disk pack 134. In response to a write command from host 101, controller 120 may buffer the data to be written for the write command in volatile memory 140.


For data to be written to disk pack 134, read/write channel 124 can encode the buffered data into write signal 32 which is provided to HSA 136 for magnetically writing data to a disk surface of disk pack 134.


In response to a read command for data stored on a disk surface of disk pack 134, controller 120 positions HSA 136 via servo controller 122 to magnetically read the data stored on a surface of disk pack 134. HSA 136 sends the read data as read signal 32 to read/write channel 124 for decoding and the data is buffered in volatile memory 140 for transferring to host 101.


The foregoing operation of disk pack 134 for servicing read and write commands generally requires more power than using solid state memory 128 since disk pack 134 needs to be physically spun up to an operating speed by SM 138 before reading or writing data on disk pack 134.


Accordingly, NVC 18 can store a copy of certain data stored on disk pack 134 to prevent disk pack 134 from having to spin up. Such data can include frequently accessed data or data used to boot up or start up computer system 100 or DSD 106. For example, to start up DSD 106 or computer system 100 without having to spin up disk pack 134, NVC 18 can include data such as a firmware for DSD 106, certain Operating System (OS) data, or BIOS boot data. Upon power up of DSD 106, controller 120 can load this data from NVC 18 and be ready to receive commands from host 101 without having to spin up disk pack 134. This arrangement ordinarily allows for a quicker ready time for computer system 100 and can allow DSD 106 to keep SM 138 powered down, in addition to other components used for the operation of disk pack 134 such as servo controller 122 and read/write channel 124. Reducing the power needed to start up DSD 106 can be especially beneficial when computer system 100 must rely on a battery with a low charge.


In some implementations, the “spin-less drive boot” generally described above and in more detail below can serve as part of a “High Spindle Suppression” (HSS) mode to reduce instances of rotation of SM 138 during the HSS mode. In such implementations, the DSD 106 is considered to be in the HSS mode during startup. Examples of an HSS mode can be found in co-pending application Ser. No. 14/105,603, entitled “Power Management for Data Storage Device”, filed on Dec. 13, 2013, which is hereby incorporated by reference in its entirety.


Example Self Learning List


FIG. 2 is a conceptual diagram illustrating self learning list 20 according to an embodiment where self learning list 20 is used to track boot up data used across a plurality of startup periods. As shown in FIG. 2, self learning list 20 includes self learning information labeled as SLI 1 to SLI N−1 between the start of the list at 202 and the end of the list at 204. In the embodiment of FIG. 2, self learning list 20 is a doubly linked list where each self learning information entry SLI 1 to SLI N−1 is associated with data read or written during a startup period of DSD 106.


Self learning list 20 can be part of a Least Recently Used (LRU) algorithm to keep a fixed amount of self learning information between the start of the list at 202 and the end of the list at 204. When read commands or write commands are received during a startup period of DSD 106, controller 120 inserts or reinserts self learning information associated with the read or write command at the start of the list at 202. The older self learning information in the list is then pushed down toward the end of the list at 204. In this way, the most recently used data is retained in the list while the least recently used data can be removed from the list, as is the case for SLI N in FIG. 2. Controller 120 may then mark the data corresponding to the removed self learning information as invalid in NVC 18, assuming a copy of the data already exists in disk pack 134. In other implementations, controller 120 may delete from NVC 18 the data corresponding to the removed self learning information. In addition, and as described below in more detail with reference to FIG. 3, data not stored in NVC 18 that has associated self learning information in self learning list 20 is designated for copying to NVC 18.
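
As an illustration of this LRU behavior (a sketch, not the patented firmware), the following Python example keeps a bounded, recency-ordered list keyed by host LBA; the class and method names are assumptions made for the example only:

```python
from collections import OrderedDict

# Minimal sketch of the LRU behavior of self learning list 20, assuming entries
# are keyed by host LBA and the list holds a fixed number of entries.
class SelfLearningList:
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # first item = most recently used

    def touch(self, lba, info=None):
        """Insert or reinsert self learning information at the start of the list."""
        if lba in self.entries and info is None:
            info = self.entries[lba]
        self.entries[lba] = info
        self.entries.move_to_end(lba, last=False)
        if len(self.entries) > self.capacity:
            # The least recently used entry is pushed off the end of the list;
            # its cached data can then be invalidated (or deleted) in NVC 18.
            evicted_lba, _ = self.entries.popitem(last=True)
            self.on_remove(evicted_lba)

    def on_remove(self, lba):
        # Placeholder: mark the corresponding data invalid in the NVC.
        pass
```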


Each instance of self learning information is associated with data read or written during a startup period of DSD 106. As shown for SLI 3, the self learning information can include a host Logical Block Address (LBA), a block size, location information, and data state information. The host LBA indicates a logical address used by host 101 for the data associated with the read or write command. In the case of a read command, the host LBA is for data requested by the host. For a write command, the host LBA is for data to be written. The block size indicates a data capacity size that can be allocated for the data in solid state memory 128, such as 4 KB or 8 KB. The location information can indicate a location for the data in solid state memory 128, such as a block or page address. The data state information can indicate whether the data is currently stored in only disk pack 134, only in solid state memory 128, or stored in both disk pack 134 and solid state memory 128 (i.e., synced on both media). Other embodiments may only include some of the above examples of self learning information, such as only including a host LBA for data read or written during a startup period.
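
For illustration, a self learning information entry with the fields described above might be modeled as follows; the field and type names are assumptions, not the device's actual record format:

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class DataState(Enum):
    DISK_ONLY = 1         # stored only in disk pack 134
    SOLID_STATE_ONLY = 2  # stored only in solid state memory 128
    SYNCED = 3            # stored on both media

@dataclass
class SelfLearningInfo:
    host_lba: int            # logical address used by host 101 for the data
    block_size: int          # allocation in solid state memory, e.g., 4 KB or 8 KB
    location: Optional[int]  # block or page address in solid state memory 128
    state: DataState         # current data state as described above
```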


By maintaining self learning list 20, it is ordinarily possible to account for changes in the boot up data used over a plurality of startup periods. Such changes may result, for example, from updates to an OS or from other changes in computer system 100. Self learning list 20 can therefore allow the data associated with a startup period to evolve over time to more accurately predict the data that will be accessed during the next startup period.


Example Self Learning Process


FIG. 3 is a flowchart for a self learning process that can be performed by controller 120 according to an embodiment. The process begins in block 302 when DSD 106 receives a read or write command from host 101 via host interface 126.


In block 304, controller 120 determines whether DSD 106 is in a startup period. Controller 120 may make this determination based on an indication received from host 101. For example, controller 120 may check in block 304 whether host 101 has issued a particular command or query indicating that an OS executing on host 101 has finished booting. In other implementations, controller 120 may use the amount of data transferred between DSD 106 and host 101 since startup to determine whether DSD 106 is in the startup period. For example, controller 120 may determine that DSD 106 is in the startup period if less than 200 MB of data have been transferred between DSD 106 and host 101 since startup. The determination in block 304 may also be made based upon a predetermined amount of time, such as 30 seconds, such that controller 120 determines that DSD 106 is in the startup period if less than 30 seconds have elapsed since startup.


In some embodiments, controller 120 may use a combination of the above indicators in block 304 to determine whether DSD 106 is in a startup period. For example, controller 120 may determine that DSD 106 is no longer in a startup period if any one of the above conditions occurs, i.e., if an indication has been received from host 101, a predetermined amount of data has been transferred, or a predetermined amount of time has elapsed since startup.
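
A minimal sketch of the combined check in block 304, assuming the 200 MB and 30 second example thresholds above; the function and parameter names are hypothetical, not the DSD firmware interface:

```python
# Illustrative only: the 200 MB and 30 second thresholds are the examples given
# above, and the function and parameter names are hypothetical.
STARTUP_DATA_LIMIT_BYTES = 200 * 1024 * 1024
STARTUP_TIME_LIMIT_SECONDS = 30.0

def in_startup_period(host_signaled_boot_done: bool,
                      bytes_transferred_since_startup: int,
                      seconds_since_startup: float) -> bool:
    """Return True only while none of the startup-end conditions has occurred."""
    if host_signaled_boot_done:
        return False
    if bytes_transferred_since_startup >= STARTUP_DATA_LIMIT_BYTES:
        return False
    if seconds_since_startup >= STARTUP_TIME_LIMIT_SECONDS:
        return False
    return True
```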


If controller 120 determines in block 304 that DSD 106 is not in a startup period, the command received in block 302 is processed normally without self learning. On the other hand, if controller 120 determines in block 304 that DSD 106 is in a startup period, controller 120 determines in block 308 whether the command received in block 302 is a read command or a write command. If the command is not a read command (i.e., the command is a write command), controller 120 in block 310 designates the data associated with the write command for later storage in disk pack 134. This data may be referred to as “dirty data” which is data that needs to be synchronized with disk pack 134 since the data will only initially be stored in solid state memory 128 in block 312. The designation of the dirty data may be made by marking an LBA for the dirty data in a list of LBAs for data to be copied to disk pack 134. The designation of block 310 may also be made by using data state information of the self learning information discussed above to indicate that the designated data is only stored in solid state memory 128. The copying to disk pack 134 can be done, for example, as part of a background activity when DSD 106 is not servicing any host commands. Such copying may be performed as part of a synchronization process such as the synchronization process of FIG. 5 described below. Although the present embodiment provides for a later backup of boot data, other embodiments may omit block 310 where only one copy of the boot data is desired.


In block 312, the data for the write command is written to NVC 18 in solid state memory 128. As part of writing the data to solid state memory 128, controller 120 may first check that there is enough available storage capacity in solid state memory 128 to write the data. In some embodiments, if there is not enough storage capacity, controller 120 may instead write the data to disk pack 134.
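
The write handling of blocks 310, 312, and 324 can be sketched as follows; the nvc, disk, and learning_list objects are hypothetical placeholders standing in for NVC 18, disk pack 134, and self learning list 20:

```python
# Sketch of the write path during a startup period (blocks 310, 312, and 324).
# The nvc, disk, and learning_list objects are hypothetical placeholders.
def handle_startup_write(lba, data, nvc, disk, learning_list):
    if nvc.has_capacity(len(data)):
        # Block 310: designate the data as "dirty" for later copying to the
        # disk pack, then write it to NVC 18 (block 312).
        nvc.mark_dirty(lba)
        nvc.write(lba, data)
    else:
        # If there is not enough capacity in the solid state memory, write the
        # data to the disk pack instead.
        disk.write(lba, data)
    # Block 324: insert or reinsert self learning information for the data.
    learning_list.touch(lba)
```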


As discussed above, the data written during a startup period can be stored in solid state memory 128 instead of disk pack 134 to improve the accessibility of data during a subsequent startup period since disk pack 134 will not need to be spun up to access the data. In addition, the power required to access the data written in block 312 during a startup period should be reduced since it will not be necessary to spin up disk pack 134 or power certain components of DSD 106 for operation of disk pack 134.


In block 324, controller 120 updates self learning list 20 to insert or reinsert self learning information at the start of the list for the data written in block 312. The process then ends in block 328.


If it is determined in block 308 that the command is a read command, controller 120 determines in block 314 whether the address for the data requested by the read command is identified in solid state memory 128 (i.e., a cache hit). If so, the requested data is read from solid state memory 128 in block 326 and self learning list 20 is updated in block 324 by inserting or reinserting self learning information at the start of the list for the data read in block 326.


If the address for the requested data is not identified in solid state memory 128 in block 314 (i.e., a cache miss), controller 120 in block 316 designates the requested data for storage in solid state memory 128. The designation in block 316 can allow controller 120 to later copy the requested data to solid state memory 128 for future start up periods. The copying can be done, for example, as part of a background activity when DSD 106 is not servicing any host commands. Such copying may be performed as part of a synchronization process such as the synchronization process of FIG. 5 described below.


The designation in block 316 may be made by marking an LBA associated with the requested data in a list of data to be copied from disk pack 134 to solid state memory 128. The designation may also be made with the use of self learning information in self learning list 20. For example, data state information of the self learning information may indicate that the data is only stored in disk pack 134 and therefore needs to be copied from disk pack 134 to NVC 18 while the self learning information remains in self learning list 20.


Controller 120 checks in block 318 whether disk pack 134 is spun up. If so, the requested data is read from disk pack 134 in block 322.


If disk pack 134 is not already spun up in block 318, controller 120 controls SM 138 in block 320 to spin up disk pack 134 to read the requested data. Controller 120 may also need to initialize or power up certain circuitry such as read/write channel 124 or servo controller 122 if it is not ready to perform a read operation on disk pack 134. The requested data is read from disk pack 134 in block 322 and the self learning list is updated in block 324 by inserting or reinserting self learning information for the requested data read in block 322. The self learning process of FIG. 3 then ends in block 328.
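
The read path of blocks 314 through 326 might look like the following sketch; again the nvc, disk, spindle, and learning_list objects are hypothetical placeholders:

```python
# Sketch of the read handling of blocks 314 through 326 during a startup period.
def handle_startup_read(lba, nvc, disk, spindle, learning_list):
    if nvc.contains(lba):                 # block 314: cache hit?
        data = nvc.read(lba)              # block 326
    else:
        nvc.designate_for_copy(lba)       # block 316: copy to NVC 18 later
        if not spindle.is_spun_up():      # block 318
            spindle.spin_up()             # block 320
        data = disk.read(lba)             # block 322
    learning_list.touch(lba)              # block 324
    return data
```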


Example Data Eviction Process


FIG. 4 is a flowchart for a data eviction process that can be performed by controller 120 according to an embodiment. In block 402, DSD 106 receives a command from host 101 to evict certain data from solid state memory 128. In this regard, the eviction of data can include moving the data from solid state memory 128 to disk pack 134, deleting the data from solid state memory 128, and/or marking the data as invalid in solid state memory 128.


In block 404, controller 120 determines whether the data to be evicted is referenced in self learning list 20. This determination may be made by comparing the LBAs of the data to be evicted with the LBAs in the self learning information of self learning list 20.


If the data is referenced in self learning list 20, this means that the data to be evicted has recently been used during a startup period of DSD 106 and should not be evicted from solid state memory 128 since it will likely be needed during a future startup of DSD 106.


Accordingly, if it is determined in block 404 that the data to be evicted is referenced in self learning list 20, controller 120 internally overrides the eviction command in block 408 so that the data remains in solid state memory 128 despite the command to evict the data. The override of the eviction command can be transparent to host 101. In other embodiments, DSD 106 may provide host 101 with a notification that the data cannot be evicted.


If controller 120 determines in block 404 that the data is not referenced in self learning list 20, controller 120 performs the eviction of the data in block 406 since the data will not likely be needed in a future startup period. The eviction process of FIG. 4 then ends in block 410.
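
The eviction check of FIG. 4 amounts to a membership test against the list before honoring the host's request; a sketch with hypothetical helper names:

```python
# Sketch of the eviction process of FIG. 4. The learning_list and nvc objects
# are hypothetical placeholders for self learning list 20 and NVC 18.
def handle_evict_command(lbas_to_evict, learning_list, nvc):
    for lba in lbas_to_evict:
        if learning_list.contains(lba):
            # Block 408: the data was recently used during a startup period,
            # so the eviction is overridden and the data stays in the NVC.
            continue
        # Block 406: not referenced in the list, so perform the eviction.
        nvc.evict(lba)
```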


By checking whether data to be evicted is referenced in self learning list 20 before evicting the data from solid state memory 128, it is ordinarily possible to reduce the likelihood that disk pack 134 will need to be spun up to access the evicted data during a startup period.


Example Synchronization Process


FIG. 5 is a flowchart for a synchronization process that can be performed by controller 120 according to an embodiment. The synchronization process of FIG. 5 can be performed as a background activity after a startup period to have data used for startup stored in both disk pack 134 and solid state memory 128.


The synchronization process starts in block 502 when a background timer expires indicating that no host commands have been received for a predetermined period of time. In block 504, controller 120 determines whether there is any data designated for storage in solid state memory 128. Such data may have been designated for storage in solid state memory 128 as a result of the data not being previously available from solid state memory 128 during a startup (e.g., the designation in block 316 of FIG. 3).


If there is data designated for storage in solid state memory 128, controller 120 in block 506 reads the designated data from disk pack 134 and writes the designated data to NVC 18 in block 508. As part of writing the data to solid state memory 128, controller 120 may first check that there is enough available storage capacity in solid state memory 128 to write the data.


If there is no designated data in block 504, controller 120 in block 510 determines whether there is any data designated for copying from solid state memory 128 to disk pack 134. If not, the synchronization process ends in block 518.


If controller 120 determines in block 510 that there is data designated for storage in disk pack 134, the designated data is read from solid state memory 128 in block 512 and written to disk pack 134 in block 514.


Self learning list 20 is updated in block 516 to reflect the current data state of the designated data being stored in both disk pack 134 and solid state memory 128 after writing the designated data in either block 508 or 514. The synchronization process of FIG. 5 then ends in block 518.
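
The synchronization of FIG. 5 can be sketched as a background pass over the two sets of designated data; this simplified example processes both directions in one call, and the helper names are hypothetical:

```python
# Sketch of the background synchronization of FIG. 5, run when the background
# timer expires (block 502). Helper names are hypothetical placeholders.
def synchronize(nvc, disk, learning_list):
    # Blocks 504-508: copy data designated for storage in solid state memory.
    for lba in nvc.designated_for_copy_from_disk():
        data = disk.read(lba)
        if nvc.has_capacity(len(data)):
            nvc.write(lba, data)
            learning_list.set_state(lba, "SYNCED")   # block 516

    # Blocks 510-514: copy "dirty" data designated for storage on the disk pack.
    for lba in nvc.designated_for_copy_to_disk():
        data = nvc.read(lba)
        disk.write(lba, data)
        learning_list.set_state(lba, "SYNCED")       # block 516
```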


By storing startup or boot data in solid state memory 128, it is ordinarily possible to reduce the power consumed by DSD 106 during the startup period and improve the time to ready for DSD 106. Furthermore, the self learning processes disclosed above allow DSD 106 to adapt to changes in startup over time by updating the data it stores in NVC 18.


The following tables illustrate test results showing improvements for the time to transition from BIOS to an OS after startup, the time for the OS User Interface (UI) to become available after startup, and the power consumption as more startup data is stored in solid state memory in accordance with the present disclosure.


TABLE 1

Run    Transition Time to OS (ms)    OS UI Availability (sec)    Spindle State
1      3,433                         9.081                       Spinning
2      3,170                         8.881                       Spinning
3      1,114                         8.617                       Spinning
4      1,106                         8.350                       Spinning
5      1,113                         8.566                       Spinning
6      1,113                         8.342                       Spinning
7      1,114                         8.159                       Spinning
8      1,113                         6.557                       Not Spinning
9      1,113                         6.311                       Not Spinning

Table 1 above shows several performance measurements across 9 consecutive startup periods for a DSD implementing the processes of FIGS. 3 and 5. The performance measurements of Table 1 include how quickly after startup the host is able to complete execution of the BIOS and transition to execution of the OS in the Transition Time to OS column. The OS UI Availability column indicates the time for the OS UI to become available after startup, and the Spindle State column indicates whether or not the spindle for the disk pack is spinning during the startup period.


As shown above, both the transition time to the OS and the time for the OS UI to become available decreased over the 9 runs. The transition time to the OS decreased by 2,320 ms or about 67%. The time for the OS UI to become available decreased by 2.77 seconds or about 30%. In addition, less power was used in runs 8 and 9 since it was no longer necessary to spin the disk pack to access data during the startup period. However, even with the disk pack still spinning in runs 2 to 7, both the transition time to the OS and the OS UI availability improved from the initial run due to more startup data being stored in the solid state memory.
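
As a quick check of the figures quoted above against Table 1 (a worked calculation, not additional test data):

```python
# Worked check of the improvements reported above using the Table 1 values.
transition_run1_ms, transition_run9_ms = 3433, 1113
ui_run1_sec, ui_run9_sec = 9.081, 6.311

transition_gain_ms = transition_run1_ms - transition_run9_ms         # 2,320 ms
transition_gain_pct = 100 * transition_gain_ms / transition_run1_ms  # ~67.6%

ui_gain_sec = round(ui_run1_sec - ui_run9_sec, 3)                    # 2.77 s
ui_gain_pct = 100 * ui_gain_sec / ui_run1_sec                        # ~30.5%
```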


Table 2 below further illustrates the power savings of the foregoing processes.


TABLE 2

Condition           Spindle State    Average Power (mW)
0% Data Stored      Spinning         3,071
50% Data Stored     Spinning         2,841
100% Data Stored    Not Spinning     2,772

As shown in Table 2 above, as more data is stored or cached in solid state memory, less power is used since fewer operations are performed on the disk pack. By the time all of the startup data is stored or cached in solid state memory, the disk pack no longer needs to be spun up and the average power during the startup period has been reduced by 299 mW or about 9.7%.


Those of ordinary skill in the art will appreciate that the various illustrative logical blocks, modules, and processes described in connection with the examples disclosed herein may be implemented as electronic hardware, computer software, or combinations of both. Furthermore, the foregoing processes can be embodied on a computer readable medium which causes a processor or computer to perform or execute certain functions.


To clearly illustrate this interchangeability of hardware and software, various illustrative components, blocks, and modules have been described above generally in terms of their functionality. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the overall system. Those of ordinary skill in the art may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.


The various illustrative logical blocks, units, modules, and controllers described in connection with the examples disclosed herein may be implemented or performed with a general purpose processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. A general purpose processor may be a microprocessor, but in the alternative, the processor may be any conventional processor, controller, microcontroller, or state machine. A processor may also be implemented as a combination of computing devices, e.g., a combination of a DSP and a microprocessor, a plurality of microprocessors, one or more microprocessors in conjunction with a DSP core, or any other such configuration.


The activities of a method or process described in connection with the examples disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The steps of the method or algorithm may also be performed in an alternate order from those provided in the examples. A software module may reside in RAM memory, flash memory, ROM memory, EPROM memory, EEPROM memory, registers, hard disk, a removable media, an optical media, or any other form of storage medium known in the art. An exemplary storage medium is coupled to the processor such that the processor can read information from, and write information to, the storage medium. In the alternative, the storage medium may be integral to the processor. The processor and the storage medium may reside in an Application Specific Integrated Circuit (ASIC).


The foregoing description of the disclosed example embodiments is provided to enable any person of ordinary skill in the art to make or use the embodiments in the present disclosure. Various modifications to these examples will be readily apparent to those of ordinary skill in the art, and the principles disclosed herein may be applied to other examples without departing from the spirit or scope of the present disclosure. The described embodiments are to be considered in all respects only as illustrative and not restrictive and the scope of the disclosure is, therefore, indicated by the following claims rather than by the foregoing description. All changes which come within the meaning and range of equivalency of the claims are to be embraced within their scope.

Claims
  • 1. A data storage device (DSD), comprising: a disk for storing data;a solid state memory including a non-volatile cache for storing data; anda controller configured to: receive a write command from a host to store data in the DSD;determine whether the DSD is in a startup period of the DSD, wherein the host accesses boot up data from the DSD during the startup period; andif it is determined that the DSD is in a startup period: store the data for the write command in the non-volatile cache of the solid state memory;update a list to include an entry for the data for the write command, the list including entries indicating data written during a plurality of startup periods of the DSD; anduse the list to determine whether to invalidate or delete the data for the write command in the non-volatile cache.
  • 2. The DSD of claim 1, wherein the controller is further configured to designate the data for the write command for later storage on the disk.
  • 3. The DSD of claim 1, wherein the controller is further configured to determine whether the DSD is in the startup period based on an indication received from the host.
  • 4. The DSD of claim 3, wherein the indication received from the host indicates that an operating system executing on the host has finished booting.
  • 5. The DSD of claim 1, wherein the controller is further configured to determine whether the DSD is in the startup period based on at least one of an amount of time since a startup of the DSD and an amount of data transferred between the DSD and the host since the startup of the DSD.
  • 6. The DSD of claim 1, wherein the list includes entries indicating data read during the plurality of startup periods.
  • 7. The DSD of claim 6, wherein the controller is further configured to move an entry to a beginning of the list when a read command or a write command is received from the host during a startup period.
  • 8. The DSD of claim 6, wherein the controller is further configured to: remove an entry from the list associated with least recently used data over the plurality of startup periods; anddelete the least recently used data from the solid state memory or mark the least recently used data as invalid in the solid state memory.
  • 9. The DSD of claim 6, wherein an entry in the list includes at least one of a logical address, a block size, and a physical address for data read or written during a startup period of the plurality of startup periods.
  • 10. The DSD of claim 6, wherein an entry in the list includes data state information indicating whether the data associated with the entry is stored only on the disk, only in the solid state memory, or stored in both the disk and the solid state memory.
  • 11. The DSD of claim 6, wherein the controller is further configured to: receive a command to evict data from the solid state memory;determine whether the data to be evicted is referenced in the list; andoverride the command to evict the data if it is determined that the data is referenced in the list.
  • 12. A method for operating a data storage device (DSD) including a solid state memory, the method comprising: receiving a write command from a host to store data in the DSD;determining whether the DSD is in a startup period, wherein the host accesses boot up data from the DSD during the startup period; andif it is determined that the DSD is in a startup period: storing the data for the write command in a non-volatile cache of the solid state memory;updating a list to include an entry for the data for the write command, the list including entries indicating data written during a plurality of startup periods of the DSD; andusing the list to determine whether to invalidate or delete the data for the write command in the non-volatile cache.
  • 13. The method of claim 12, further comprising designating the data for the write command for later storage on a disk of the DSD.
  • 14. The method of claim 12, further comprising determining whether the DSD is in the startup period based on an indication received from the host.
  • 15. The method of claim 14, wherein the indication received from the host indicates that an operating system executing on the host has finished booting.
  • 16. The method of claim 12, further comprising determining whether the DSD is in the startup period based on at least one of an amount of time since a startup of the DSD and an amount of data transferred between the DSD and the host since the startup of the DSD.
  • 17. The method of claim 12, wherein the list includes entries associated with data read during the plurality of startup periods.
  • 18. The method of claim 17, further comprising moving an entry to a beginning of the list when a read command or a write command is received from the host during a startup period.
  • 19. The method of claim 17, further comprising: removing an entry from the list associated with least recently used data over the plurality of startup periods; anddeleting the least recently used data from the solid state memory or marking the least recently used data as invalid in the solid state memory.
  • 20. The method of claim 17, wherein an entry in the list includes at least one of a logical address, a block size, and a physical address for data read or written during a startup period of the plurality of startup periods.
  • 21. The method of claim 17, wherein an entry in the list includes data state information indicating whether the data associated with the entry is stored only on a disk of the DSD, only in the solid state memory, or stored in both the disk and the solid state memory.
  • 22. The method of claim 17, further comprising: receiving a command to evict data from the solid state memory;determining whether the data to be evicted is referenced in the list; andoverriding the command to evict the data if it is determined that the data is referenced in the list.
CROSS-REFERENCE TO RELATED APPLICATION

This application claims the benefit of U.S. Provisional Application No. 61/897,038, filed on Oct. 29, 2013, which is hereby incorporated by reference in its entirety.

Related Publications (1)
Number Date Country
20150120995 A1 Apr 2015 US
Provisional Applications (1)
Number Date Country
61897038 Oct 2013 US