A power-savings option for personal computers and other computing devices is to put the device in hibernation mode. When the device is set to hibernation mode, the data from the device's volatile RAM is saved to non-volatile storage as a hibernation file. With the hibernation file saved, the device does not need to power the volatile RAM and other components, thus conserving battery life of the device. When the computing device exits hibernation mode, the stored hibernation file is retrieved from non-volatile storage and copied back to the device's RAM, thus restoring the device to the state it was in prior to hibernation.
Embodiments of the present invention are defined by the claims, and nothing in this section should be taken as a limitation on those claims.
By way of introduction, the below embodiments relate to storage and host devices for overlapping storage areas for a hibernation file and cached data. In one embodiment, a storage device is provided that receives a command from a host device to evict cached data in a first address range of the memory. The storage device then receives a command from the host device to store a hibernation file in a second address range of the memory, wherein the second address range does not exist in the memory. The storage device maps the second address range to the first address range and stores the hibernation file in the first address range. In another embodiment, a host device is provided that sends a command to a first storage device to evict cached data in a first address range of the first storage device's memory. The host device then sends a command to the first storage device to store a hibernation file in the first address range.
Other embodiments are possible, and each of the embodiments can be used alone or together in combination. Accordingly, various embodiments will now be described with reference to the attached drawings.
By way of introduction, some of the below embodiments can be used to make a solid-state drive appear to a host device as having both a dedicated partition for a hibernation file and a separate dedicated partition for cached data, even though it, in fact, has only a single partition for cached data. This is accomplished by over-provisioning the solid-state drive and exposing a larger capacity to the host device (e.g., a 16 GB solid-state drive will be seen by the host device as having 20 GB). When a hibernation event occurs, a caching module on the host device evicts cached data in the solid-state drive to make room for the hibernation file. A hibernation module on the host device then copies a hibernation file to the solid-state drive. However, as the hibernation module is not aware of the smaller actual capacity of the solid-state drive, it may attempt to write the hibernation file to an address range that does not exist on the solid-state drive. Accordingly, a controller in the solid-state drive can map the “extra” logical block address (LBA) space that the host device thinks exists onto a range within the physical space of the solid-state drive that actually exists, thereby overlapping the hibernation area with the caching area. (Since the cached data was evicted, no data loss occurs even though the address spaces overlap.) When the hibernation mode is exited, the hibernation module requests the hibernation file from the non-existent address range. The solid-state drive maps the address to the real address and returns the hibernation file to the host device. (The mapping can occur both before entering hibernation mode and when exiting hibernation mode.) As the hibernation file in the solid-state drive is no longer needed after the data from the file is restored to the host device's memory, the caching module can repurpose that space by repopulating the cache to its full capacity.
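To make this sequence concrete, the following sketch outlines the host-side flow just described, using the 16 GB/20 GB example. Every function name here is a hypothetical placeholder for whatever commands the caching module and hibernation module actually issue; this is an illustrative outline, not a real driver interface.

```c
/*
 * Illustrative host-side flow for the embodiment above. All function
 * names are hypothetical placeholders, not an actual host/drive API.
 */
#include <stdint.h>

#define GB (1024ULL * 1024ULL * 1024ULL)

/* Hypothetical host-to-drive commands (byte-addressed for brevity). */
void evict_cache_range(uint64_t start, uint64_t len);
void write_hibernation_file(uint64_t start, const void *buf, uint64_t len);
void read_hibernation_file(uint64_t start, void *buf, uint64_t len);

void enter_hibernation(const void *ram_image, uint64_t image_len)
{
    /* Caching module: evict cached data to make room for the file. */
    evict_cache_range(12 * GB, 4 * GB);

    /* Hibernation module: write to the 16 GB-20 GB range, which does not
     * physically exist; the drive's controller remaps it internally. */
    write_hibernation_file(16 * GB, ram_image, image_len);
}

void exit_hibernation(void *ram_image, uint64_t image_len)
{
    /* The file is requested from the same non-existent range; the drive
     * maps it to the real address and returns the data. */
    read_hibernation_file(16 * GB, ram_image, image_len);
    /* The caching module can now repopulate the cache to full capacity. */
}
```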
In another embodiment, both the caching module and the hibernation module on the host device are aware of the space limitations of the solid-state drive, so the hibernation module can provide suitable addresses to the solid-state drive. This avoids the need for the solid-state drive to perform address translation on “out of range” addresses.
Turning now to the drawings,
In this embodiment, the host device 10 takes the form of a personal computer, such as an ultrabook. The host device 10 can take any other suitable form, such as, but not limited to, a mobile phone, a digital media player, a game device, a personal digital assistant (PDA), a kiosk, a set-top box, a TV system, a book reader, a medical device, or any combination thereof.
As shown in
As noted above, the storage sub-system 50 contains both a solid-state drive 100 and a hard disk drive 150 (or a hybrid hard drive). The solid-state drive 100 contains a controller 110 having a CPU 120. The controller 110 can be implemented in any suitable manner. For example, the controller 110 can take the form of a microprocessor or processor and a computer-readable medium that stores computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an application specific integrated circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320. The controller 110 also has an interface 130 to the host device 10 and a memory interface 140 to a solid-state (e.g., flash, 3D, BICS, MRAM, RRAM, PCM, and others) memory device 145. The solid-state drive 100 can contain other components (e.g., a crypto-engine, additional memory, etc.), which are not shown in
To illustrate this embodiment, consider the situation in which the host device 10 is a personal computer, such as an ultrabook, and the solid-state drive 100 and the hard disk drive 150 are internal (or external) drives of the computer. (In one embodiment, the solid-state drive 100 is embedded in the motherboard of the computer 10.) In this example, the hard disk drive 150 has a larger capacity than the solid-state drive 100 (e.g., 320 GB vs. 16 GB). In operation, the larger-capacity hard disk drive 150 is used as conventional data storage in the computer, and the smaller-capacity solid-state drive 100 is used to cache frequently-accessed data, reducing the time needed to retrieve data from or store data to the storage sub-system 50 (because the solid-state drive 100 has a faster access time than the hard disk drive 150). The caching module 30 on the computer/host device 10 is responsible for moving data into and out of the solid-state drive 100, as needed.
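As a rough illustration of the caching module's role in this dual-drive arrangement, the read path might be organized as in the following sketch. All of the helper functions (ssd_cache_lookup(), ssd_read(), hdd_read(), ssd_cache_populate()) are assumed names for illustration only, not part of any actual caching software.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helpers standing in for the caching module's internals. */
bool ssd_cache_lookup(uint64_t hdd_lba, uint64_t *ssd_lba); /* true on hit */
void ssd_read(uint64_t ssd_lba, void *buf, uint64_t lbas);
void hdd_read(uint64_t hdd_lba, void *buf, uint64_t lbas);
void ssd_cache_populate(uint64_t hdd_lba, const void *buf, uint64_t lbas);

/* Serve a read from the faster solid-state drive when the data is cached,
 * falling back to the hard disk drive (and populating the cache) otherwise. */
void cached_read(uint64_t hdd_lba, void *buf, uint64_t lbas)
{
    uint64_t ssd_lba;
    if (ssd_cache_lookup(hdd_lba, &ssd_lba)) {
        ssd_read(ssd_lba, buf, lbas);      /* cache hit: fast path */
    } else {
        hdd_read(hdd_lba, buf, lbas);      /* cache miss: slow path */
        ssd_cache_populate(hdd_lba, buf, lbas);
    }
}
```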
In addition to caching, this “dual-drive” system can be used in conjunction with a hibernation mode. As discussed above, hibernation mode (also referred to as the “S4 state”) is a power-savings option for personal computers and other computing devices, in which data from the device's volatile RAM is saved to non-volatile storage as a hibernation file. With the hibernation file saved, the device 10 does not need to power the volatile DRAM 40, thus conserving the battery life of the device 10. When the device 10 exits hibernation mode, the stored hibernation file is retrieved from non-volatile storage and copied back to the device's DRAM 40, restoring the device to the state it was in prior to hibernation. With the use of a dual-drive SSD/HDD storage sub-system 50, the hibernation file is preferably stored in the solid-state drive 100, as the faster access time of the solid-state drive 100 allows the device 10 to exit hibernation mode faster.
The hibernation module 35 of the host device 10 is responsible for storing and retrieving the hibernation file. The hibernation module 35 can also perform other activities relating to the hibernation process. For example, the hibernation module 35 can work with the host's BIOS to enable a smooth transition from a system standby mode (the “S3 mode”) to the hibernation mode (the “S4 mode”) using a timer, and can perform compression on the data to be stored in the hibernation file. Examples of a hibernation module 35 include Intel's Smart Response Technology (SRT) and Intel's Fast Flash Storage (iFFS) software. Of course, these are only examples, and the hibernation module can take other forms.
One issue that can arise in the use of a dual-drive system is that the requirements for data caching may conflict with the requirements for hibernation. For example, Intel's 2012 ultrabook requirements specify that the minimum caching partition (i.e., the size of the solid-state drive 100) has to be at least 16 GB. The requirements also specify a dedicated partition (e.g., an iFFS partition) in the solid-state drive 100 of 4 GB (or 2 GB) to store the hibernation file (e.g., an iFFS file), so the computer can exit the hibernation mode in seven seconds or less. Together, these two requirements result in the need for the solid-state drive to have a capacity of 20 GB. However, many solid-state drives today are sold either as 16 GB drives or 24 GB drives. While a 24 GB drive will meet the requirements, it may not be a cost-effective solution.
To address this problem, in one embodiment, the controller 110 of the solid-state drive 100 maps the 4 GB hibernation partition and the 16 GB caching partition into a single 16 GB physical space partition. (4 GB and 16 GB are merely examples, and it should be understood that these embodiments can be applied to other sizes.) This embodiment takes advantage of the fact that the hibernation file and the cached data are not used at the same time. That is, the hibernation file is stored when the host device 10 is in hibernation mode; thus, the host device 10 would not be requesting cached data. Likewise, cached data would be requested when the host device 10 is active and not in hibernation mode; thus, a hibernation file would not be needed. This embodiment will now be described in more detail in conjunction with
As shown in
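A minimal sketch of the controller-side remapping follows, using the example sizes above (16 GB physical, 20 GB exposed, 4 GB overlap). The constant and function names are illustrative assumptions, not taken from any actual firmware, and LBAs are assumed to be 512 bytes.

```c
/*
 * Sketch of the controller's LBA remapping for the overlapped
 * hibernation/caching area. All names and sizes are illustrative.
 */
#include <stdint.h>

#define LBA_BYTES  512ULL
#define GB         (1024ULL * 1024ULL * 1024ULL)

#define PHYSICAL_LBAS  ((16ULL * GB) / LBA_BYTES)      /* real capacity: 16 GB  */
#define EXPOSED_LBAS   ((20ULL * GB) / LBA_BYTES)      /* capacity seen by host */
#define OVERLAP_LBAS   (EXPOSED_LBAS - PHYSICAL_LBAS)  /* 4 GB of "extra" LBAs  */

/* Map a host-supplied LBA to a physical LBA. Host LBAs in the non-existent
 * 16 GB-20 GB range are folded back onto the 12 GB-16 GB range whose cached
 * data was evicted, overlapping the hibernation area with the caching area.
 * In-range LBAs pass through unchanged. */
static uint64_t map_host_lba(uint64_t host_lba)
{
    if (host_lba >= PHYSICAL_LBAS)
        return host_lba - OVERLAP_LBAS;
    return host_lba;
}
```

Because the same mapping is applied on the write path (entering hibernation) and the read path (exiting hibernation), the hibernation file is stored to and retrieved from the same physical region, consistent with the mapping occurring both before entering and when exiting hibernation mode.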
As mentioned above, after the host device 10 resumes from a hibernation event, the caching module 30 rebuilds the “overlapped” 4 GB of cache that was evicted to make room for the hibernation file. It is possible that this rebuilding process can result in lower cache hit rates immediately after hibernation events, during the periods in which the cache is rebuilt. To avoid this, prior to the hibernation event, the caching module 30 can copy the 4 GB of cached data in the 12 GB-16 GB LBA range of the solid-state drive 100 into the hard disk drive 150 before de-populating or evicting it from the cache. (When a hybrid hard drive is used, the solid-state drive can send the copy directly to the hard disk drive instead of sending the copy to the host device for it to store in the hard disk drive.) This is shown as “Step 0” in
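The “Step 0” staging just described could be sketched as follows. The byte-addressed I/O helpers and the reserved backup area on the hard disk drive are assumptions for illustration; an actual caching module would use its own commands and bookkeeping.

```c
#include <stdint.h>

#define GB     (1024ULL * 1024ULL * 1024ULL)
#define CHUNK  (4ULL * 1024ULL * 1024ULL)   /* copy 4 MB at a time */

/* Hypothetical byte-addressed I/O helpers (illustration only). */
void ssd_read_bytes(uint64_t off, void *buf, uint64_t len);
void ssd_write_bytes(uint64_t off, const void *buf, uint64_t len);
void hdd_read_bytes(uint64_t off, void *buf, uint64_t len);
void hdd_write_bytes(uint64_t off, const void *buf, uint64_t len);

/* "Step 0": stage the 12 GB-16 GB cached region to a reserved area of the
 * hard disk drive before it is evicted, so the cache can be restored right
 * after resume instead of being slowly re-learned. */
void preserve_overlapped_cache(uint64_t hdd_backup_off)
{
    static uint8_t chunk[CHUNK];
    for (uint64_t done = 0; done < 4 * GB; done += CHUNK) {
        ssd_read_bytes(12 * GB + done, chunk, CHUNK);
        hdd_write_bytes(hdd_backup_off + done, chunk, CHUNK);
    }
}

/* After resume (and after the hibernation file is no longer needed), copy
 * the staged data back into the overlapped cache region. */
void restore_overlapped_cache(uint64_t hdd_backup_off)
{
    static uint8_t chunk[CHUNK];
    for (uint64_t done = 0; done < 4 * GB; done += CHUNK) {
        hdd_read_bytes(hdd_backup_off + done, chunk, CHUNK);
        ssd_write_bytes(12 * GB + done, chunk, CHUNK);
    }
}
```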
In the above embodiment, the caching module 30 was aware of the space limitations of the solid-state drive 100, but the hibernation module 35 was not. Thus, the controller 110 in the solid-state drive 100 was used to translate address ranges provided by the hibernation module. In another embodiment, both the caching module and the hibernation module are aware of the space limitations of the solid-state drive, so the hibernation module can provide suitable addresses to the solid-state drive. This avoids the need for the solid-state drive to perform address translation on “out of range” addresses. So, in the example set forth above, after the caching module evicts 4 GB of data from the cache, the hibernation module would send a request to the solid-state drive to store the hibernation file in the 12 GB-16 GB LBA range, instead of the non-existent 16 GB-20 GB address range. Additionally, this “aware” hibernation module can perform some or all of the other activities that the “unaware” hibernation module described above performed (e.g., working with the host's BIOS to enable a smooth transition from the system standby mode (the “S3 mode”) to the hibernation mode (the “S4 mode”) using a timer, performing compression on the data to be stored in the hibernation file, etc.). In one particular implementation, Intel's Smart Response Technology (SRT) or iFFS software is modified to be aware of the capacity limitations of the solid-state drive and to perform the above processes.
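For this “aware” case, no controller-side translation is required because the host computes an in-range target itself. A minimal sketch, assuming the hibernation file is simply placed at the tail of the drive's real capacity (the function name is hypothetical):

```c
#include <stdint.h>

#define GB (1024ULL * 1024ULL * 1024ULL)

/* With an "aware" hibernation module, the host picks an address inside the
 * drive's real capacity, so the drive never sees an out-of-range LBA.
 * Here the hibernation file is placed at the tail of the physical space. */
static uint64_t hibernation_target_offset(uint64_t real_capacity_bytes,
                                          uint64_t hibernation_bytes)
{
    return real_capacity_bytes - hibernation_bytes;
}
```

With the example sizes, hibernation_target_offset(16 * GB, 4 * GB) yields the 12 GB offset, i.e., the start of the 12 GB-16 GB LBA range used above.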
It is intended that the foregoing detailed description be understood as an illustration of selected forms that the invention can take and not as a definition of the invention. It is only the following claims, including all equivalents, that are intended to define the scope of the claimed invention. Finally, it should be noted that any aspect of any of the preferred embodiments described herein can be used alone or in combination with one another.