The features and advantages of the disclosed subject matter will become apparent from the following detailed description of the subject matter in which:
According to embodiments of the subject matter disclosed in this application, a computing system may conserve the most power by entering the S4 state (rather than the S3 state) over long periods of inactivity and also be able to resume from the S4 state rapidly to provide a quick response. Rather than storing hibernate data in the HDD, a non-volatile cache may be used to cache the hibernate data when the system enters the S4 state. The non-volatile cache may be made of flash memory and may be coupled to a bus that connects the HDD with the disk controller. When resuming from the S4 state, the hibernate data may be read from the non-volatile cache and hence resume time may be reduced because the access latency to the non-volatile cache is much shorter than to the HDD. Both the caching and resuming processes may be performed in an OS-transparent manner (e.g., by a storage driver and Option Read-Only Memory (ROM) code). The resume time may be further reduced by using an efficient resuming process which relies on a mapping table to help locate the desired data in the non-volatile cache. Additionally, the non-volatile cache may also be used as a disk cache to improve Input/Output (I/O) performance and to reduce power consumption.
Reference in the specification to “one embodiment” or “an embodiment” of the disclosed subject matter means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosed subject matter. Thus, appearances of the phrase “in one embodiment” in various places throughout the specification are not necessarily all referring to the same embodiment.
Additionally, chipset 130 may comprise a memory controller 125 that is coupled to a main memory 150 through a memory bus 155. The main memory 150 may store data and sequences of instructions that are executed by multiple cores of the processor 110 or any other device included in the system. The memory controller 125 may access the main memory 150 in response to memory transactions associated with multiple cores of the processor 110, and other devices in the computing system 100. In one embodiment, memory controller 125 may be located in processor 110 or some other circuitry. The main memory 150 may comprise various memory devices that provide addressable storage locations which the memory controller 125 may read data from and/or write data to. The main memory 150 may comprise one or more different types of memory devices such as Dynamic Random Access Memory (DRAM) devices, Synchronous DRAM (SDRAM) devices, Double Data Rate (DDR) SDRAM devices, or other memory devices.
Moreover, chipset 130 may include a disk controller 170 coupled to a hard disk drive (HDD) 190 (or other disk drives not shown in the figure) through a bus 195. Disk controller 170 allows processor 110 to communicate with the HDD 190. In some embodiments, disk controller 170 may be integrated into a disk drive (e.g., HDD 190). There may be different types of buses coupling disk controller 170 and HDD 190, for example, the Advanced Technology Attachment (ATA) bus and the PCI Express (PCI-E) bus.
An OS (not shown in the figure) may run in processor 110 to control the operations of the computing system 100. The OS may use the ACPI for managing power consumption by different components in the system. Under the ACPI, there are four sleep states, S1 through S4. The time needed to bring the system back into the normal wakeup working state (the wake latency) increases from S1, the shallowest sleep state, to S4, the deepest. S1 is the most power-hungry of the sleep states, with the processor(s) and Random Access Memory (RAM) still powered on. S2 is a deeper sleep state than S1, where the processor is powered off. The most common sleep states are S3 and S4. In the S3 state, main memory (RAM) 150 is still powered and the user can quickly resume work exactly where he/she left off—the main memory content when the computer comes back from S3 is the same as when it was put into S3. S4 is the hibernation state, under which the content of main memory 150 is saved to HDD 190, preserving the state of the operating system, all applications, open documents, etc. The system may be put into either the S3 (sleep) state or the S4 (hibernation) state manually or automatically after a certain period of inactivity.
Since the main memory is not powered on in the S4 state, a system can save more power in the S4 state than in the S3 state. However, the resume time is much longer from the S4 state than from the S3 state since the main memory content needs to be read from a hard drive. When a micro-drive is used, the resume time from the S4 state can be even longer than the resume time with a typical HDD. For an ultra mobile PC, it is desirable to have an instant-on resuming capability while still saving as much power as possible (and thus extending battery life). Therefore, it is desirable to reduce the resume time from the S4 state for an ultra mobile PC. According to one embodiment of the subject matter disclosed in this application, a non-volatile cache (NV cache) may be used to cache the main memory content. For example, a NV cache (not shown in
OS file services 415 provide services to non-critical OS services 405 and applications. For example, OS file services 415 handle non-critical writes for non-critical OS services 405, and facilitate data prefetches for periodic applications. Components in the application layer such as non-critical OS services 405 and applications 410 do not directly deal with components in the controller layer and the hardware layer, but go through OS components. For example, an application reads from or writes to memory 475 through memory driver 430, and reads from or writes to HDD 485 through OS/OEM disk driver 435. OS power management services 425 may use the ACPI to manage power consumption by different components in system 400. For example, when the OS puts the system into the S4 hibernation state, power management services 425 request that an image be generated for content in memory 475, and that the image be written to HDD 485. After the image has been written to the HDD, the power management services 425 turn off power to memory 475 and other hardware components in the hardware layer. OS power management services 425 communicate with the memory and the HDD through the memory driver and the OS/OEM disk driver, respectively.
Memory driver 430 and OS/OEM disk driver 435 serve as interfaces between the OS and the controller layer, and facilitate any communication between the OS and memory 475 and HDD 485, respectively. When booting or resuming from a hibernation state, the BIOS boot service loads the first 512 bytes of the storage media. The first 512 bytes usually include the OS first-level boot loader, which loads the OS second-level loader (shown as OS loader 440 in
Memory controller 460 and disk controller 465 serve as hardware-side interfaces to the OS for memory 475 and HDD 485, respectively. The memory controller and the disk controller are typically located within a chipset. In some computing systems, however, there might not be a chipset, and the hardware-side memory and disk controllers may reside within relevant chips that communicate between the OS and memory and HDD using appropriate software drivers. BIOS/Option ROM 455 helps determine what a system can do before the OS is up and running. The BIOS includes firmware code required to control basic peripherals such as the keyboard, mouse, display screen, disk drive, serial communications, etc. The BIOS is typically standardized, especially for PCs. To customize some functions controlled by the BIOS, Option ROM may be used, which may be considered an extension of the BIOS to support OEM (Original Equipment Manufacturer) specific proprietary functionalities. When a system is booting up or resuming from the S4 state, the BIOS calls code stored in the Option ROM. Thus, if a user desires a system to boot up differently from a standard booting process, the user may write his/her own booting code and store it in the Option ROM. The Option ROM may also include proprietary code to access memory controller 460 and disk controller 465.
According to one embodiment of the subject matter disclosed in this application, an NV cache 490 may be added to system 400. The NV cache may be coupled to disk bus 480 and be used to cache memory content when the system enters S4 state. The NV cache may be made of flash memory. When the system resumes from S4 state, the memory content (or hiberfile) can be restored from the NV cache rather than the HDD. Because the access latency to the NV cache is much shorter than the access latency to the HDD, restoring the memory content from the NV cache can significantly reduce the resuming time and thus provide instant-on or near instant-on experience for the user. Additionally, the NV cache may also be used as a disk cache in a normal wakeup working state. As a disk cache, the NV cache may help improve system I/O performance and reduce average system power consumption since the disk can be spun down for longer periods of time. Moreover, the subject matter disclosed herein may be extended to utilize the NV cache (such as flash memory) as a fast storage device for OS and applications combined with a slower storage device for data.
In one embodiment, caching and restoring the memory content using the NV cache may be performed entirely by the OS. In another embodiment, this can be done in an OS transparent manner. For example, caching the memory content in the NV cache may be done by the storage driver (e.g., OS/OEM disk driver 435); and restoring the memory content from the NV cache may be done by code in the Option ROM. Although OS/OEM disk driver 435 is shown in
At block 550, a cache image may be created for a data block in each write if there is enough room available in the NV cache for the data block. At block 560, the cache image may be written to the NV cache. The cache image of a block of data to be written to the NV cache may still be in the form of an SRB, but the metadata of the SRB needs to include the LBA of the block of data on the NV cache. Additionally, information specific to reads/writes to/from the HDD may be removed from the cache image. A mapping table, which correlates LBAs of data blocks on the HDD with the addresses of the same data blocks on the NV cache, may also be created while writing blocks of data to the NV cache. After the memory image has been written to the NV cache, or when the NV cache is full, the mapping table may be written to the NV cache.
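The mapping-table construction described above can be illustrated with a minimal sketch. The function name `build_mapping_table` and the assumption that blocks are written to the NV cache sequentially starting at cache LBA 0 are illustrative, not taken from the application; contiguous HDD LBAs are merged into a single entry, as described later in connection with block 815.

```python
def build_mapping_table(writes):
    """Build a mapping table correlating HDD LBAs with NV-cache LBAs.

    `writes` is a sequence of (hdd_lba, sector_count) pairs in the order
    the data blocks are written to the NV cache.  Blocks with contiguous
    HDD LBAs are merged into a single entry.  Returns a list of
    (table_lba, table_lba_count, table_cache_lba) entries.
    """
    table = []
    next_cache_lba = 0  # assumed: blocks land sequentially on the cache
    for hdd_lba, count in writes:
        if table:
            lba, cnt, cache_lba = table[-1]
            # Merge when this block continues the previous one on the HDD;
            # it also continues it on the cache, since writes are sequential.
            if hdd_lba == lba + cnt:
                table[-1] = (lba, cnt + count, cache_lba)
                next_cache_lba += count
                continue
        table.append((hdd_lba, count, next_cache_lba))
        next_cache_lba += count
    return table

# Two contiguous writes merge into one entry; a discontiguous third
# write starts a new entry.
print(build_mapping_table([(100, 8), (108, 4), (500, 2)]))
# [(100, 12, 0), (500, 2, 12)]
```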
Logical block addressing (LBA) is a common scheme used for specifying the location of blocks of data stored on computer storage devices, generally secondary storage systems such as hard disks. The term LBA can mean either the address or the block to which it refers. Since LBA was first developed around SCSI (Small Computer System Interface) drives, LBA is often mentioned along with SCSI Request Block (SRB). Under the LBA scheme, blocks on disk are simply located by an index, with the first block being LBA=0, the second LBA=1, and so on. Most modern computers, especially PCs, support the LBA scheme. When an OS sends a data request (either a write or a read request) to HDD, the request typically includes LBA—the logical start address of the data block on the HDD, and the sector count—size of the data block on the disk. Typically in storage disk terms, a sector is also considered a logical block. For convenience of description, a data block is considered as a sequence of contiguous sectors in this application.
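The indexing described above can be made concrete with a small sketch. The traditional 512-byte sector size is assumed here (actual sector sizes are device-specific), and the function name is illustrative.

```python
SECTOR_SIZE = 512  # bytes; the traditional sector size (device-specific)

def lba_to_byte_range(lba, sector_count):
    """Translate an (LBA, sector count) request into a byte range.

    Blocks are located by a simple index: LBA 0 starts at byte 0,
    LBA 1 at byte 512, and so on.
    """
    start = lba * SECTOR_SIZE
    end = start + sector_count * SECTOR_SIZE
    return start, end

# A request with LBA=2 and a sector count of 3 covers bytes 1024..2560.
print(lba_to_byte_range(2, 3))  # (1024, 2560)
```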
Turning back to
For the following description, several notations are used for convenience. Specifically, reqLBA is the logical start address of a data block that is requested to be read; reqLBACount indicates the number of sectors that are to be read starting from the reqLBA; and cacheLBA is the actual logical start address of the requested data block in the NV cache. tableLBA[i] is the logical start address of a data block in a mapping table entry; tableLBACount[i] is the count of sectors in the table entry; and tableCacheLBA[i] is the logical start address of the mapped data block in the table entry; where i is the index of the entry in the table. Basically, tableLBA[i], tableLBACount[i], and tableCacheLBA[i] correspond to values in columns 710, 720, and 730 for entry i, respectively.
When process 800 starts at block 805, a current entry index is initialized with the index of the first entry (i.e., 0) in the mapping table if the reqLBA is the very first one, and with the index of the entry at which the process had stopped searching for the previous reqLBA if the reqLBA is not the very first one. If the reqLBA is determined to be within the bounds of the mapping table at block 810, a further check may be performed to determine whether the request is actually available in the mapping table, by checking whether the reqLBA is available within the current entry in the mapping table at block 815. This further check may be conducted in a circular linear manner. The check may start searching from the entry at which it had stopped searching for the previous reqLBA. After the last entry in the table is reached, the search wraps around to the first entry and continues until the entry before the entry at which it had stopped searching for the previous reqLBA.
For a reqLBA to be present within a table entry, the reqLBA should be greater than or equal to the table entry's start address of the data block, and (reqLBA+reqLBACount) should be less than or equal to the table entry's start address of the data block plus the table entry's data block size in sectors. The check at block 815 thus requires the entire requested block, not merely a part of it, to be available within a table entry. During the caching process, all data blocks that have contiguous LBAs are merged into a single entry in the mapping table. Also, when a system resumes from the S4 state, most of the requested data blocks typically have contiguous LBAs. Thus, if only a part of the reqLBA is available within a table entry, the requested block is split, i.e., part of it is on the NV cache and part of it is on the HDD. When a data block is split, serving it partially from the NV cache and partially from the disk is more costly than serving the entire request from the disk, since it requires multiple requests and a merge prior to providing the data block to the OS. Therefore, the entire block starting with the reqLBA should be available within a table entry for the reqLBA to be considered present in the table.
If the reqLBA is not in the current entry, the current entry index may be set to the index of the next entry in the mapping table at block 820. Block 830 determines whether the last entry in the mapping table has been checked for the reqLBA. Whether the last entry has just been checked may be determined by whether the current entry index equals the total number of entries. If the current entry index equals the total number of entries in the mapping table, the last entry has just been checked. Then the current entry index may be reset to the index of the first entry in the mapping table at block 845. If the last entry has not been checked yet, the next entry in the mapping table is checked for the reqLBA at block 815. Block 850 determines whether the current entry index equals the last index, which is the index of the entry at which the process had stopped searching for the previous reqLBA. If the answer is “no,” the next entry in the mapping table is checked for the reqLBA at block 815; otherwise, a value of −1 may be returned at block 855, which indicates that the reqLBA is not present in the mapping table, and the process may end at block 860.
Once the reqLBA is found in the current entry at block 815, the start address of the reqLBA in the NV cache, i.e., cacheLBA, is calculated by adding the offset of the reqLBA from the tableLBA[i] to the tableCacheLBA[i], where i is the index of the current table entry, at block 835. Note that the start address of the requested data block and its size in sectors may not always match the start address of a data block in a table entry and its size. The start address of the requested data block may be at an offset (in sectors) from the start address of a data block in a table entry, which may be calculated at block 825. The cacheLBA of the reqLBA may be returned at block 840, and the process may end at block 860. If the reqLBA is not found in the mapping table, the requested data block may be read from the disk rather than from the NV cache.
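The lookup of process 800 can be sketched as follows, using the notations introduced above. This is a simplified illustration: the function name and the representation of the table as (table_lba, table_lba_count, table_cache_lba) tuples are assumptions, and the bounded loop stands in for the circular termination checks of blocks 830, 845, and 850.

```python
def find_cache_lba(table, req_lba, req_lba_count, start_index):
    """Look up req_lba in the mapping table, in the manner of process 800.

    `table` is a list of (table_lba, table_lba_count, table_cache_lba)
    entries.  The search begins at `start_index`, the entry at which the
    previous lookup stopped, and proceeds in a circular linear fashion.
    Returns (cache_lba, stop_index), or (-1, start_index) if the entire
    requested block is not present within a single entry.
    """
    n = len(table)
    i = start_index
    for _ in range(n):  # visit each entry at most once, wrapping around
        lba, count, cache_lba = table[i]
        # Block 815: the whole request must fall within one entry;
        # a split block is served from the disk instead.
        if lba <= req_lba and req_lba + req_lba_count <= lba + count:
            # Blocks 825/835: offset of req_lba within the entry, added
            # to the entry's start address on the NV cache.
            return cache_lba + (req_lba - lba), i
        i = (i + 1) % n  # blocks 820/845: advance, wrapping at the end
    return -1, start_index  # block 855: not present in the mapping table

table = [(100, 12, 0), (500, 2, 12)]
print(find_cache_lba(table, 104, 4, 0))  # (4, 0): offset 4 into entry 0
print(find_cache_lba(table, 110, 4, 0))  # (-1, 0): request splits past entry 0
```

The second request illustrates the split-block rule: LBAs 110–113 extend past the end of the first entry (sectors 100–111), so the lookup fails and the block would be read from the disk.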
Although an example embodiment of the disclosed subject matter is described with reference to block and flow diagrams in
In the preceding description, various aspects of the disclosed subject matter have been described. For purposes of explanation, specific numbers, systems and configurations were set forth in order to provide a thorough understanding of the subject matter. However, it is apparent to one skilled in the art having the benefit of this disclosure that the subject matter may be practiced without the specific details. In other instances, well-known features, components, or modules were omitted, simplified, combined, or split in order not to obscure the disclosed subject matter.
Various embodiments of the disclosed subject matter may be implemented in hardware, firmware, software, or combination thereof, and may be described by reference to or in conjunction with program code, such as instructions, functions, procedures, data structures, logic, application programs, design representations or formats for simulation, emulation, and fabrication of a design, which when accessed by a machine results in the machine performing tasks, defining abstract data types or low-level hardware contexts, or producing a result.
For simulations, program code may represent hardware using a hardware description language or another functional description language which essentially provides a model of how designed hardware is expected to perform. Program code may be assembly or machine language, or data that may be compiled and/or interpreted. Furthermore, it is common in the art to speak of software, in one form or another, as taking an action or causing a result. Such expressions are merely a shorthand way of stating execution of program code by a processing system which causes a processor to perform an action or produce a result.
Program code may be stored in, for example, volatile and/or non-volatile memory, such as storage devices and/or an associated machine readable or machine accessible medium including solid-state memory, hard-drives, floppy-disks, optical storage, tapes, flash memory, memory sticks, digital video disks, digital versatile discs (DVDs), etc., as well as more exotic media such as machine-accessible biological state preserving storage. A machine readable medium may include any mechanism for storing, transmitting, or receiving information in a form readable by a machine, and the medium may include a tangible medium through which electrical, optical, acoustical or other forms of propagated signals or carrier waves encoding the program code may pass, such as antennas, optical fibers, communications interfaces, etc. Program code may be transmitted in the form of packets, serial data, parallel data, propagated signals, etc., and may be used in a compressed or encrypted format.
Program code may be implemented in programs executing on programmable machines such as mobile or stationary computers, personal digital assistants, set top boxes, cellular telephones and pagers, and other electronic devices, each including a processor, volatile and/or non-volatile memory readable by the processor, at least one input device and/or one or more output devices. Program code may be applied to the data entered using the input device to perform the described embodiments and to generate output information. The output information may be applied to one or more output devices. One of ordinary skill in the art may appreciate that embodiments of the disclosed subject matter can be practiced with various computer system configurations, including multiprocessor or multiple-core processor systems, minicomputers, mainframe computers, as well as pervasive or miniature computers or processors that may be embedded into virtually any device. Embodiments of the disclosed subject matter can also be practiced in distributed computing environments where tasks may be performed by remote processing devices that are linked through a communications network.
Although operations may be described as a sequential process, some of the operations may in fact be performed in parallel, concurrently, and/or in a distributed environment, and with program code stored locally and/or remotely for access by single or multi-processor machines. In addition, in some embodiments the order of operations may be rearranged without departing from the spirit of the disclosed subject matter. Program code may be used by or in conjunction with embedded controllers.
While the disclosed subject matter has been described with reference to illustrative embodiments, this description is not intended to be construed in a limiting sense. Various modifications of the illustrative embodiments, as well as other embodiments of the subject matter, which are apparent to persons skilled in the art to which the disclosed subject matter pertains are deemed to lie within the scope of the disclosed subject matter.
This application is related to commonly assigned U.S. application Ser. No. ______ (Attorney Docket No. 42P24468), concurrently filed by Ram Chary and Pradeep Sebastian and entitled “Configuring a Device for Operation on a Computing Platform,” and is related to commonly assigned U.S. application Ser. No. ______ (Attorney Docket No. 42P24527), concurrently filed by Ulf R. Hanebutte, Ram Chary, Pradeep Sebastian, Shubha Kumbadakone, and Shreekant S. Thakkar and entitled “Method and Apparatus for Caching Memory Content on a Computing System to Facilitate Instant-On Resuming from a Hibernation State.”