1. Technical Field
This disclosure relates to computer systems including host systems and data storage systems. More particularly, the disclosure relates to caching data in a high performance zone of data storage systems, such as hard disk drives and hybrid drives.
2. Description of the Related Art
Users of host systems, such as personal computers, often find operating the host systems frustrating because the systems respond sluggishly to user requests. For example, when users turn on host systems, the host systems can be slow to transition from a power-off state to a power-on state in which the host systems are fully operative for the users. Users may, in some cases, wait one or two minutes after turning on the host systems before the users can request to run an application on the host systems. In addition, even after users request to run the application, the host systems may take another 10 to 20 seconds to load the application before the users can use the application.
Systems and methods that embody the various features of the invention will now be described with reference to the following drawings, in which:
While certain embodiments are described, these embodiments are presented by way of example only, and are not intended to limit the scope of protection. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the scope of protection.
Overview
To improve the response time of a host system to user requests, the host system can cache data likely to be accessed for a user to a data cache. The data cache can be located in a storage medium having a faster access time than other storage media used to store data for the host system. As a result, the host system can retrieve a significant amount of data from the data cache in the storage medium having the faster access time rather than from the other storage media, thereby increasing the responsiveness of the host system to requests of the user.
In some implementations, a host system can store host system data in a storage system that includes a non-volatile solid-state memory array and a hard disk media. The host system can use the memory array as a data cache for caching data likely to be accessed for the user, while using the hard disk media to store other data for the host system. Since the memory array can offer a faster access time than the hard disk media in some cases, the memory array may be a higher performance zone of memory than the hard disk media and preferred for use as the data cache.
In some embodiments of the present invention, a host system can store host system data in a storage system that includes a hard disk media. The host system can use a dedicated zone of the hard disk media as a data cache for caching data likely to be accessed for the user, while using a remainder of the hard disk media to store other data for the host system. The dedicated zone can be a zone that includes or consists of multiple data sectors of the hard disk media that are contiguous with one another, enabling faster reading from the dedicated zone by limiting the seek range of the head while reading. Additionally or alternatively, the dedicated zone can be a zone that includes part of an outside diameter (OD) zone of the hard disk media that has a faster data read rate than other zones of the hard disk media. Advantageously, the host system can realize improved response time to user requests without the storage system including both the hard disk media and a non-volatile solid-state memory array.
In some embodiments of the present invention, a host system can store host system data in a storage system that includes various types of memories having different performance characteristics. The host system can use a high performance memory or zone as a data cache for caching data likely to be accessed for the user, while using the other memories or zones of the memories to store other data for the host system. If the high performance memory becomes inoperative or the performance degrades below a threshold, the host system can use a different high performance memory or zone for caching data likely to be accessed for the user. For instance, the host system can initially cache data to a non-volatile solid-state memory array of the storage system. However, once performance of the memory array falls below a threshold metric, the host system can instead cache data to a zone of the hard disk media that includes or consists of multiple data sectors of the hard disk media that are contiguous with one another.
System Overview
The cache management module 111 can determine host system data to cache to a memory of the host system 110 or storage system 120A. For example, the cache management module 111 can determine copies of data, such as application or operating system data of the host system 110, to store in a high performance zone of memory having a relatively quick retrieval time for stored data. In some embodiments, the cache management module 111 can utilize an intelligent process to select data to cache in the memory based on an observed frequency or recency of access of data by the user of the host system 110 or based on hints from the operating system module 114 or the application module 115. When data selected by the cache management module 111 is cached, the end-user experience of the host system 110 can be improved because the cached data may be accessible more quickly or easily than data stored in other areas of memory of the host system 110 or storage system 120A.
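The frequency- and recency-based selection described above can be sketched as follows. This is a minimal illustrative policy, not the disclosed module's actual algorithm; the class name, the scoring function, and its decay weighting are all assumptions made for the example.

```python
from collections import Counter
from time import time

class CacheSelector:
    """Hypothetical sketch of a cache-selection policy that scores
    logical blocks by access frequency, decayed by recency, in the
    spirit of the cache management module described above."""

    def __init__(self, capacity_blocks):
        self.capacity = capacity_blocks
        self.frequency = Counter()  # LBA -> number of observed accesses
        self.last_access = {}       # LBA -> timestamp of latest access

    def record_access(self, lba, now=None):
        now = time() if now is None else now
        self.frequency[lba] += 1
        self.last_access[lba] = now

    def select_for_cache(self, now=None):
        """Return the LBAs most worth caching: frequent and recent."""
        now = time() if now is None else now

        def score(lba):
            age = now - self.last_access[lba]
            return self.frequency[lba] / (1.0 + age)  # recency decay

        ranked = sorted(self.frequency, key=score, reverse=True)
        return ranked[:self.capacity]
```

A block accessed twice but a while ago can thus tie with, or lose to, a block accessed once just now, which is one simple way to balance the frequency and recency signals the disclosure mentions.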
The driver 112 can receive messages from the other components of the host system 110. Based on the received messages from the other components, the driver 112 can communicate storage access commands and/or accompanying host system data to the storage system 120A. The storage access commands can include, for example, read data or write data commands issued for particular logical addresses (e.g., LBAs) of the storage system 120A. In response to the read data commands transmitted to the storage system 120A, the driver 112 can receive stored data from the storage system 120A. In some embodiments, the driver 112 assigns particular storage access commands and/or host system data to partitions of the storage system 120A based on partition identifiers (IDs) in a Master Boot Record (MBR) that correspond to groups of logical addresses of the storage system 120A. In one embodiment, the driver 112 is loaded by the operating system of the host system 110.
The driver 112, in conjunction with the cache management module 111, can maintain a zone of the hard disk media 122 dedicated to caching data for the host system 110. The cache management module 111 can select data to cache to the dedicated zone, and the driver 112, in turn, can issue write data commands to cache the selected data to a dedicated cache partition of the hard disk media 122 that includes the dedicated zone. When the host system 110 retrieves the selected data from the storage system 120A, the driver 112 can issue read data commands to retrieve the cached data from the dedicated cache partition rather than another partition of the storage system 120A. In one embodiment, the driver 112 and the cache management module 111 are integrated as a single module/driver.
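The read-routing behavior of the driver, serving cached data from the dedicated cache partition in preference to the main partition, can be sketched as below. The dict-backed partitions and the `cache_map` bookkeeping are simplifying assumptions for illustration; a real driver would issue LBA-addressed storage access commands instead.

```python
class CacheRoutingDriver:
    """Illustrative sketch: route reads through a dedicated cache
    partition before falling back to the main partition."""

    def __init__(self, cache_partition, main_partition):
        self.cache = cache_partition  # dict-like: cache LBA -> data
        self.main = main_partition    # dict-like: main LBA -> data
        self.cache_map = {}           # main LBA -> cache LBA

    def cache_write(self, lba, data, cache_lba):
        """Copy selected data into the dedicated cache partition and
        remember where it was placed."""
        self.cache[cache_lba] = data
        self.cache_map[lba] = cache_lba

    def read(self, lba):
        """Serve a read from the cache partition when the data is
        cached; otherwise read the main partition."""
        if lba in self.cache_map:
            return self.cache[self.cache_map[lba]]
        return self.main[lba]
```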
The controller 121 can be configured to receive data and/or storage access commands from the driver 112 of the host system 110. The storage access commands communicated by the driver 112 can include the write data and read data commands issued by the host system 110. Read and write commands can specify a logical address used to access the hard disk media 122. The controller 121 can execute the received commands in the hard disk media 122.
The storage system 120A can store data communicated by the host system 110. In other words, the storage system 120A can act as memory storage for the host system 110. To facilitate this function, the controller 121 can implement a logical interface. The logical interface can present the memory of the hard disk media 122 as a set of logical addresses (e.g., contiguous addresses) where host system data can be stored. Internally, the controller 121 can map logical addresses to various physical locations or addresses in the hard disk media 122. In some embodiments, particular logical addresses correspond to certain memory locations of the hard disk media 122, such as the physical locations or zones of memory as discussed with respect to
The partitioning module 113 can partition the hard disk media 122 and divide the hard disk media 122 into multiple logical storage units, which can then each be treated by the host system 110 as if they are separate or independent disks. In addition, the partitioning module 113 can assign certain ranges of LBAs to particular partitions, where the LBAs correspond to physical zones of the hard disk media. For instance, the partitioning module 113 can assign LBAs corresponding to one physical zone (e.g., an OD zone of the hard disk media 122) to a particular partition while not assigning LBAs corresponding to another physical zone to that partition of the hard disk media 122. By partitioning the hard disk media 122 to include one or more physical zones, the partitioning module 113 can enable the host system 110 to designate one or more zones or regions of the hard disk media 122 for specific or dedicated uses, such as for caching data for the host system 110, storing application data, or storing operating system data. In some embodiments, each zone having a dedicated use can include less than about 1%, 5%, 10%, or 25% of the data sectors of a storage medium. For example, a zone dedicated to caching data for the host system 110 can include less than about 10% of the data sectors of hard disk media 122. The use of a dedicated zone for caching limits the motion of the head while seeking within the zone, and thus increases read performance and random access performance. In one embodiment, the use of such an arrangement enables a hard disk drive to have a similar performance as a hybrid drive with a hard disk media and a non-volatile solid-state memory cache, but without the additional cost of having the solid-state memory cache.
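The assignment of contiguous LBA ranges to partitions can be sketched with a small lookup table. The zone boundaries, partition names, and LBA counts below are invented for the example; only the idea of mapping an LBA range onto a single physical zone comes from the text above.

```python
from bisect import bisect_right

class PartitionTable:
    """Illustrative sketch of mapping contiguous LBA ranges to named
    partitions, so that one partition can cover exactly one
    high-performance physical zone (e.g. an OD zone)."""

    def __init__(self):
        self.starts = []   # sorted partition start LBAs
        self.entries = []  # parallel list of (start, end, name)

    def add_partition(self, start_lba, end_lba, name):
        i = bisect_right(self.starts, start_lba)
        self.starts.insert(i, start_lba)
        self.entries.insert(i, (start_lba, end_lba, name))

    def partition_for(self, lba):
        """Return the partition name covering this LBA, or None."""
        i = bisect_right(self.starts, lba) - 1
        if i >= 0:
            start, end, name = self.entries[i]
            if start <= lba <= end:
                return name
        return None

# Hypothetical layout: the fastest (OD) LBAs become the cache partition.
table = PartitionTable()
table.add_partition(0, 99_999, "cache")           # OD zone, fastest reads
table.add_partition(100_000, 999_999, "os")
table.add_partition(1_000_000, 9_999_999, "apps")
```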
The operating system module 114 can run the operating system for the host system 110 and control general operations of the host system 110. The operating system module 114 can additionally provide hints of data likely to be accessed by the operating system module 114 to the cache management module 111 so that the cache management module 111 can better determine data to cache for the host system 110. In some embodiments, the operating system module 114 stores operating system data in a dedicated partition of the hard disk media 122. In other embodiments, the operating system module 114 does not store operating system data in a dedicated partition.
The application module 115 can run applications on the host system 110 that enable the host system 110 to perform specific or custom functions. In addition, the application module 115 can provide hints of data likely to be accessed by the application module 115 to the cache management module 111 so that the cache management module 111 can better determine data to cache for the host system 110. In some embodiments, the application module 115 stores application data in a dedicated partition of the hard disk media 122. In other embodiments, the application module 115 does not store application data in a dedicated partition.
The volatile memory 116 can store data for the other components of the host system 110. For example, the driver 112 can issue storage access commands to the volatile memory 116 to cache data for the host system 110.
The controller 121 of the storage system 120B can be configured to receive data and/or storage access commands from the driver 112 of the host system 110. Read and write commands from the driver 112 can specify a logical address used to access the hard disk media 122 or the non-volatile solid-state memory array 123. The controller 121 can map logical addresses to various physical locations or addresses in the hard disk media 122 and the non-volatile solid-state memory array 123 and accordingly execute the received commands in the hard disk media 122 or the non-volatile solid-state memory array 123 based on the specified logical address.
Because the disk 210 may be rotated at a constant angular velocity, the data rate can increase toward the outer diameter tracks as the linear velocity is higher in relation to that of the inner diameter tracks (where the radius is smaller). In addition, tracks can be grouped into a number of physical zones, wherein the data rate is substantially constant or within a certain range across a zone, and is increased from the inner diameter zones to the outer diameter zones. As is illustrated in
Although each of the four zones illustrated in
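The relationship between track radius and raw data rate at constant angular velocity can be worked through numerically. With the disk spinning at a fixed RPM, the linear velocity under the head is the angular velocity times the radius, so for roughly constant linear bit density the data rate scales with radius. The RPM, radii, and bit density below are illustrative assumptions, not values from the disclosure.

```python
import math

rpm = 7200
omega = rpm * 2 * math.pi / 60  # angular velocity in rad/s

def data_rate(radius_m, bits_per_meter=1.0e9):
    """Raw read rate in bits/s for a track at the given radius,
    assuming constant angular velocity and linear bit density."""
    linear_velocity = omega * radius_m  # m/s of media under the head
    return linear_velocity * bits_per_meter

inner = data_rate(0.015)  # hypothetical ~15 mm inner-diameter track
outer = data_rate(0.045)  # hypothetical ~45 mm outer-diameter track
ratio = outer / inner     # the OD track reads 3x faster here
```

This is why the OD zones are the "high performance" zones preferred for the dedicated cache: at the same RPM, more media passes under the head per second.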
Data Caching
At block 305, the process 300 partitions a storage system. For example, the process 300 can partition a hard disk media of the storage system into multiple partitions. Each partition can include or exclude one or more zones, such as physical zones, of the hard disk media. In addition, the process 300 can assign one or more other storage media, such as a non-volatile solid-state memory array or another hard disk media, to a particular partition of the storage media. In some embodiments, the process 300 partitions the storage system into one partition dedicated to caching data for a host system, another partition dedicated to storing operating system data for the host system, and yet another partition for storing application data for the host system. Either or both of the partitions dedicated to caching data for the host system and storing operating system data can include one or more high performance zones of the hard disk media to enable quicker access of this data than if the data were stored in other zones of the storage system. For instance, the partition dedicated to caching data for the host system can include Zone 1 212, and the partition dedicated to storing operating system data can include Zone 2 214.
At block 310, the process 300 determines data to cache in one partition of the storage system, such as a partition dedicated to caching data for the host system. The process 300 can determine what data to cache to the partition by selecting data likely to be accessed for a user of the host system so that the user experience for the user of the host system can be improved. For example, the process 300 can determine data to cache based on a frequency or recency of when the data was accessed by the user or based on hints from an operating system or applications running on the host system.
At block 315, the process 300 communicates a cache write command and write data to the storage system. The cache write command can indicate to the storage system to cache the write data in one region of the storage system, such as a dedicated cache zone or partition of the hard disk media or the non-volatile solid-state memory array.
At block 320, the process 300 communicates a cache read command to the storage system. The cache read command can indicate to the storage system to read the cached write data from a particular partition of the storage system, such as the dedicated cache zone or partition of the hard disk media or the non-volatile solid-state memory array.
At block 325, the process 300 receives cached data from the storage system. The process 300 can receive the cached data in response to the cache read commands transmitted at block 320.
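The flow of blocks 305 through 325 can be sketched end to end. The dict-backed storage model, partition names, and function names are assumptions made to keep the example self-contained; a real implementation would issue storage access commands to the storage system 120A.

```python
class Storage:
    """Toy storage system: named partitions of LBA -> data."""

    def __init__(self):
        self.partitions = {}

    def make_partition(self, name):
        self.partitions[name] = {}

    def cache_write(self, partition, lba, data):
        self.partitions[partition][lba] = data

    def cache_read(self, partition, lba):
        return self.partitions[partition][lba]

def run_caching_flow(storage, hot_data):
    """Sketch of process 300: partition, cache-write, cache-read."""
    # Block 305: partition the storage system.
    for name in ("cache", "os", "apps"):
        storage.make_partition(name)
    # Block 310: the data to cache (hot_data) is assumed already
    # selected, e.g. by frequency/recency or operating-system hints.
    # Block 315: communicate cache write commands and write data.
    for lba, data in hot_data.items():
        storage.cache_write("cache", lba, data)
    # Blocks 320-325: communicate cache reads and receive cached data.
    return {lba: storage.cache_read("cache", lba) for lba in hot_data}
```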
At block 405, the process 400 determines host system data to cache to a high performance zone of memory. For example, the process 400 can determine data to cache based on a frequency or recency of when the data was accessed by the user or based on hints from an operating system or applications running on the host system.
At block 410, the process 400 determines whether a non-volatile solid-state memory array of a storage system is in an operative or a non-operative state. The process 400 can make this determination, for instance, based on whether the memory array can be powered or whether performance by the non-volatile solid-state memory array satisfies a reliability metric (e.g., a number of blocks marked as unusable is below a threshold, or a number of read errors when reading a test pattern is below a threshold).
If the non-volatile solid-state memory array is in the operative state, the process 400 moves to block 415. At block 415, the decision is made to cache host system data in the non-volatile solid-state memory array of the storage system. For example, in some embodiments, if the controller 121 of the storage system 120B determines that the non-volatile solid-state memory array 123 is in the operative state, the controller 121 caches data for the host system 110 in the non-volatile solid-state memory array 123. In some embodiments, if the driver 112 of the host system 110 determines that the non-volatile solid-state memory array 123 is in the operative state, the driver 112 issues appropriate storage access commands. Moreover, an indication that the data are cached in the non-volatile solid-state memory array 123 can be stored at the host system 110 or in the storage system 120B so that the cached data can be correctly read.
If the non-volatile solid-state memory array is in the non-operative state, the process 400 moves to block 420. In one embodiment, when the non-volatile solid-state memory array has reached a point where it is in a read-only mode (otherwise still accessible), it is deemed to be in the non-operative state. In another embodiment, the non-volatile solid-state memory array may be deemed to be in the non-operative state when its reliability metric reaches or falls below a threshold. At block 420, the decision is made to cache host system data in an alternate storage medium. The alternate storage medium can include a local storage of the host system (e.g., the volatile memory 116) or other storage medium of the storage system (e.g., another non-volatile solid-state memory array or a partition of a hard disk media). This decision can be made by the controller 121 of the storage system 120B or by the driver 112 of the host system 110, for instance. When the decision is made by the controller 121, the controller 121 can select to cache the data in another high performance zone of the storage system 120B, such as a high performance zone of the hard disk media 122. When the decision is made by the driver 112, the driver 112 can issue appropriate storage access commands to set up a cache partition in the alternate storage medium accessible to the driver so that the data are stored to the alternate storage medium. The alternative storage medium may be the volatile memory 116 or a partition including a high performance zone of the hard disk media 122.
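The operative-state check and fallback decision of blocks 410-420 reduce to a small predicate. The threshold value and the parameter names below are invented for illustration; the disclosure only says the reliability metric is compared against a threshold and that read-only mode counts as non-operative.

```python
# Hypothetical reliability threshold: number of blocks marked unusable.
BAD_BLOCK_THRESHOLD = 100

def choose_cache_target(ssd_powered, ssd_read_only, bad_block_count,
                        fallback="hdd_od_zone"):
    """Sketch of blocks 410-420: cache to the solid-state array while
    it is operative, otherwise fall back to an alternate medium such
    as a high-performance zone of the hard disk media."""
    operative = (
        ssd_powered
        and not ssd_read_only                      # read-only => non-operative
        and bad_block_count < BAD_BLOCK_THRESHOLD  # reliability metric holds
    )
    return "ssd" if operative else fallback
```

Either the controller 121 or the driver 112 could evaluate such a predicate; the essential point is that the cache target is re-selected rather than caching being abandoned when the preferred medium degrades.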
In some embodiments, since the non-volatile solid-state memory array can be a preferred memory or quicker access memory of the memories included in the host system or the storage system, the non-volatile solid-state memory array can be the default high performance zone for caching data for the host system as illustrated in
Other Variations
Those skilled in the art will appreciate that in some embodiments, other types of communications can be implemented between system components. Further, additional system components can be utilized, and disclosed system components can be combined or omitted. For example, the functionality of the cache management module 111 can be implemented by the driver 112 and vice versa. In addition, the actual steps taken in the disclosed processes, such as the processes 300 and 400 illustrated in
While certain embodiments have been described, these embodiments have been presented by way of example only, and are not intended to limit the scope of the protection. Indeed, the novel methods and systems described herein may be embodied in a variety of other forms. Furthermore, various omissions, substitutions and changes in the form of the methods and systems described herein may be made without departing from the spirit of the protection. The accompanying claims and their equivalents are intended to cover such forms or modifications as would fall within the scope and spirit of the protection. For example, the systems and methods disclosed herein can be applied to hard disk drives, hybrid hard drives, and the like. In addition, other forms of storage (e.g., DRAM or SRAM, battery backed-up volatile DRAM or SRAM devices, EPROM, EEPROM memory, etc.) may additionally or alternatively be used. As another example, the various components illustrated in the figures may be implemented as software and/or firmware on a processor, ASIC/FPGA, or dedicated hardware. If implemented in software, the functions can be stored as one or more instructions on a computer-readable medium. A storage media can be any available media that can be accessed by a computer, such as RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Also, the features and attributes of the specific embodiments disclosed above may be combined in different ways to form additional embodiments, all of which fall within the scope of the present disclosure.
Although the present disclosure provides certain preferred embodiments and applications, other embodiments that are apparent to those of ordinary skill in the art, including embodiments which do not provide all of the features and advantages set forth herein, are also within the scope of this disclosure. Accordingly, the scope of the present disclosure is intended to be defined only by reference to the appended claims.