U.S. patent application Ser. No. 15/273,573, entitled “System and Method for Adaptive Optimization for Performance in Solid State Drives Based on Segment Access Frequency” by inventors Lip Vui Kan and Young Hwan Jang, Attorney Docket No. DC-107304.01, filed on Sep. 22, 2016, describes exemplary methods and systems and is incorporated by reference in its entirety.
The present invention relates in general to the field of information handling system persistent storage, and more particularly to an information handling system persistent storage device that caches information in translation table memory.
As the value and use of information continues to increase, individuals and businesses seek additional ways to process and store information. One option available to users is information handling systems. An information handling system generally processes, compiles, stores, and/or communicates information or data for business, personal, or other purposes thereby allowing users to take advantage of the value of the information. Because technology and information handling needs and requirements vary between different users or applications, information handling systems may also vary regarding what information is handled, how the information is handled, how much information is processed, stored, or communicated, and how quickly and efficiently the information may be processed, stored, or communicated. The variations in information handling systems allow for information handling systems to be general or configured for a specific user or specific use such as financial transaction processing, airline reservations, enterprise data storage, or global communications. In addition, information handling systems may include a variety of hardware and software components that may be configured to process, store, and communicate information and may include one or more computer systems, data storage systems, and networking systems.
Information handling systems generally process information held in persistent storage using instructions also stored in persistent storage. Generally, at power up of an information handling system, embedded code loads onto the processor to "boot" an operating system by retrieving the operating system from the persistent storage device, such as a solid state drive (SSD) or hard disk drive (HDD), to random access memory (RAM) interfaced with the processor. Executing instructions from RAM typically provides more rapid information transfers than executing instructions from persistent storage, such as flash memory. However, since RAM consumes power when storing information, long term storage of information in RAM is not typically cost effective compared with persistent storage devices that store information using flash memory, magnetic disks, magnetic tapes, optical discs and other non-volatile memory media that do not consume power to store the information. Once the operating system executes on the processor from RAM, other applications that run over the operating system are retrieved from persistent storage to RAM for execution. Similarly, information processed by the operating system and applications, such as documents and images, is retrieved to RAM from persistent memory for modification and then stored again in persistent memory for long term storage while the information handling system is powered down.
One difficulty with executing applications and processing information from persistent storage is that retrieving and writing instructions and information from and to persistent storage takes longer than similar operations in RAM. For example, a user who initiates an application from an SSD will typically experience some lag as the application is retrieved from the SSD into RAM. Similar lag typically occurs during writes of information from RAM to the SSD. A typical NAND read operation takes on the order of 1000 times as long as a read operation from DRAM, so that host media command completion times fall in the range of milliseconds. Another difficulty with flash memory, such as the NAND found in many SSDs, is that flash memory wears with writes over time until the memory becomes unusable. In order to maximize the useful life of flash memory, storage devices often implement wear leveling algorithms that attempt to even out the program/erase cycles of the flash memory across the storage device. A typical wear leveling algorithm uses address indirection to coordinate use of different memory addresses over time.
In order to improve the speed of read and write operations while managing wear leveling, persistent storage devices generally include a controller that executes embedded code to interface an information handling system processor with the storage device's non-volatile memory. The information handling system operating system references stored information by using a Logical Block Address (LBA), which the storage device controller translates to a physical address. Referencing an LBA allows the operating system to track information by a constant address while shifting the work of translating to physical addresses to the specialized hardware and embedded code of the storage device controller. The storage device controller is then free to perform wear leveling by adapting logical addresses to physical addresses that change over time. A flash translation layer (FTL) table managed by the storage device controller tracks the relationship between logical and physical memory addresses.
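By way of a non-limiting illustrative sketch, the address indirection described above may be modeled as a simple lookup table; the class and method names below are illustrative assumptions rather than actual controller firmware:

```python
# Minimal sketch of FTL address indirection (illustrative only; real
# controller firmware manages this mapping in embedded code).

class FlashTranslationLayer:
    def __init__(self):
        self.lba_to_physical = {}  # logical block address -> physical page

    def lookup(self, lba):
        """Translate a host LBA to its current physical address."""
        return self.lba_to_physical[lba]

    def remap(self, lba, new_physical):
        """Wear leveling moves data; update the map so the host's LBA
        still resolves even though the physical location changed."""
        self.lba_to_physical[lba] = new_physical
```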
Generally, storage device controllers include a RAM buffer that stores the FTL table for rapid address lookup by a processor integrated in the storage device controller. On power up of the storage device, the storage device controller retrieves the FTL table from non-volatile memory to RAM and then responds to operating system LBA interactions by looking up physical addresses in the FTL. As a general rule, 1 MB of RAM indexes physical addresses for 1 GB of non-volatile memory. Thus, as an example, a 512 MB RAM FTL buffer supports a 512 GB SSD.
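As a back-of-the-envelope check of this rule of thumb, assuming 4 KB mapped pages and 4-byte table entries (typical values, assumed here rather than specified above):

```python
# Why ~1 MB of RAM per 1 GB of flash: with 4 KB pages and 4-byte
# physical-address entries, each GB of flash needs 262,144 entries.
PAGE_SIZE = 4 * 1024   # bytes per mapped flash page (assumed)
ENTRY_SIZE = 4         # bytes per FTL table entry (assumed)

def ftl_table_bytes(flash_bytes):
    return (flash_bytes // PAGE_SIZE) * ENTRY_SIZE

# A 512 GB SSD needs roughly a 512 MB translation table:
print(ftl_table_bytes(512 * 1024**3) // 1024**2)  # -> 512
```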
One recent innovation by Dell Inc. for improved persistent storage device performance is “System and Method for Adaptive Optimization for Performance in Solid State Drives Based on Segment Access Frequency,” by Lip Vui Kan and Young Hwan Jang, application Ser. No. 15/273,573, Docket Number DC-107304, filed on Sep. 22, 2016, which is incorporated herein as if fully set forth. This innovation reduces the size of the RAM buffer used to store an FTL table by limiting the number of LBAs in the FTL table that are loaded to RAM, thus reducing the amount of RAM used by the storage device controller.
Therefore, a need has arisen for a system and method which cache information at a storage device controller.
In accordance with the present invention, a system and method are provided which substantially reduce the disadvantages and problems associated with previous methods and systems for interacting with persistent storage devices. A storage device controller selectively loads all or only a portion of a translation table in a translation table memory. If only a portion of the translation table is loaded, the unused translation table memory is repurposed to cache information stored in the persistent storage device.
More specifically, a host information handling system executes an operating system to manage information, such as with reads and writes to a persistent storage device. The host communicates requests to a persistent storage device controller using logical block addresses. The persistent storage device controller translates the logical block address to a physical address of persistent storage to read or write information at the physical address location. In an example embodiment, the persistent storage device is a solid state drive having NAND flash memory that the storage device controller wear levels by reference to a flash translation layer table stored in a DRAM integrated with the storage controller. A cache manager selectively loads all or only a portion of the flash translation table to the DRAM based upon predetermined conditions, such as an analysis that only selected portions of the logical block addresses will be referenced by the host. If only a portion of the translation table is loaded, then unused DRAM is repurposed to cache information related to selected ones of the logical block addresses in the DRAM. If the host references a logical block address that has information cached in the translation table memory, then the storage controller responds using the cached information. Thus, for example, a read request by an operating system to a logical block address having cached information stored in the repurposed translation table memory will receive a more rapid response from the storage controller by looking up the information in the translation table memory cache instead of retrieving the information from flash memory of the persistent storage device.
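A minimal sketch of this repurposing, with the dict-based layout and names assumed purely for illustration, might look like:

```python
# Sketch of repurposing unused translation table DRAM as a data cache.
# Names and the dict-based layout are assumptions for illustration.

class TranslationMemory:
    def __init__(self, capacity_mb):
        self.capacity_mb = capacity_mb
        self.ftl = {}    # partial LBA -> physical-address map
        self.cache = {}  # repurposed space: LBA -> cached data

    def load_partial_ftl(self, full_table, predicted_lbas):
        """Load only FTL entries for the LBAs the host is predicted to
        reference; DRAM the partial table does not use is then free to
        hold cached data instead of sitting idle."""
        self.ftl = {lba: full_table[lba] for lba in predicted_lbas}
```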
The present invention provides a number of important technical advantages. One example of an important technical advantage is that a storage device controller translation table memory is selectively repurposed to provide a more rapid response to reads from persistent storage. When only a portion of the translation table is loaded to a translation table memory, unused memory space in the translation table memory is repurposed to cache information stored in the persistent storage device. The translation table memory provides a rapid response to requests for information from the persistent storage device when the information is cached. Selecting commonly referenced information for the cache based upon historical references focuses the rapid cache response on information more frequently requested by a host device. Predictive algorithms in the storage device controller or at the host, such as in the operating system, optimize the selection of information for caching in the translation table memory.
The present invention may be better understood, and its numerous objects, features and advantages made apparent to those skilled in the art by referencing the accompanying drawings. The use of the same reference number throughout the several figures designates a like or similar element.
An information handling system persistent storage device selectively caches information in translation table memory. For purposes of this disclosure, an information handling system may include any instrumentality or aggregate of instrumentalities operable to compute, classify, process, transmit, receive, retrieve, originate, switch, store, display, manifest, detect, record, reproduce, handle, or utilize any form of information, intelligence, or data for business, scientific, control, or other purposes. For example, an information handling system may be a personal computer, a network storage device, or any other suitable device and may vary in size, shape, performance, functionality, and price. The information handling system may include random access memory (RAM), one or more processing resources such as a central processing unit (CPU) or hardware or software control logic, ROM, and/or other types of nonvolatile memory. Additional components of the information handling system may include one or more disk drives, one or more network ports for communicating with external devices as well as various input and output (I/O) devices, such as a keyboard, a mouse, and a video display. The information handling system may also include one or more buses operable to transmit communications between the various hardware components.
Referring now to
In an example embodiment, on power up CPU 12 retrieves and executes operating system 20 from persistent storage in a bootstrapping process. Operating system 20 includes instructions and information stored in persistent memory that is retrieved to RAM 14 for execution by CPU 12. The example persistent storage device is a solid state drive 18 (SSD) that includes an integrated controller 24, NAND flash memory modules 26 and random access memory (RAM) 27. SSD controller 24 receives logical block address (LBA) requests from operating system 20, converts the LBAs to physical addresses of NAND 26, applies the requested action at the physical address associated with the LBA, and responds to operating system 20 in terms of the LBA. RAM 27 supports SSD controller 24 by providing a fast response buffer to store information used by SSD controller 24. In one embodiment, RAM 27 may comprise separate physical memories that support separate tasks, such as buffering information for transfer to and from NAND 26 and storing a translation table that maps NAND locations to operating system memory requests. In the example embodiment, RAM 27 integrates with SSD 18; however, in alternative embodiments, some buffer functions may be supported with system RAM 14. In the example embodiment, solid state drive 18 includes a wear leveling algorithm that spreads program/erase (P/E) cycles across NAND devices to extend the life span of the flash memory over time. Wear leveling is accomplished at SSD controller 24 so that operating system 20 interacts with information through LBAs while the actual physical storage location of the information can change within the persistent storage. A dedicated portion of RAM 27 stores a translation table that maps operating system LBA requests to physical NAND addresses. In alternative embodiments, other types of persistent storage devices may be used, with or without wear leveling.
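One common wear-leveling heuristic, sketched below for illustration only and not necessarily the algorithm of SSD controller 24, directs each new write to the free erase block with the fewest program/erase cycles, reusing the FlashTranslationLayer sketch above:

```python
# Sketch of a simple wear-leveling write path: pick the free erase
# block with the fewest program/erase cycles (illustrative heuristic).

def choose_block(free_blocks, pe_counts):
    """free_blocks: iterable of block ids; pe_counts: block id -> P/E count."""
    return min(free_blocks, key=lambda blk: pe_counts[blk])

def wear_leveled_write(lba, data, ftl, free_blocks, pe_counts):
    blk = choose_block(free_blocks, pe_counts)
    pe_counts[blk] += 1  # another P/E cycle consumed on this block
    # ...program 'data' into block 'blk' here, then update the
    # indirection map so the host's LBA resolves to the new location.
    ftl.remap(lba, blk)
```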
Referring now to
In order to translate LBAs to physical addresses, processor 30 references a flash translation layer (FTL) table 38 stored in translation table memory 34, depicted as a RAM buffer. FTL table 38 includes mappings for all possible LBAs to physical addresses of flash 26 so that, as wear leveling changes the physical address associated with an LBA, processor 30 is able to find information referenced by a host device. In a typical SSD, each GB of flash memory uses about 1 MB of translation table memory to map LBAs to physical addresses. Thus, for example, a 512 GB SSD will have a translation table memory size of 512 MB. In the example embodiment, translation table memory 34 is a DRAM buffer that provides rapid responses so that processor 30 can rapidly retrieve physical addresses for LBA requests. For example, a DRAM buffer is integrated in SSD controller 24 and dedicated to mapping LBAs to physical addresses. In alternative embodiments, alternative types of memory may be used in alternative configurations for storing FTL table 38.
As is set forth in greater detail in U.S. patent application Ser. No. 15/273,573, incorporated herein as if fully set forth, under some predetermined conditions, copying less than all of FTL table 38 to translation table memory 34 provides adequate support for address translation. For example, a typical host device exhibits data locality spanning about 8 GB during normal operations. By predicting the span of persistent memory needs and loading only the portion of FTL table 38 used for the predicted span, less time is taken to load the FTL table 38 data and less memory space is used.
For instance, using the above example numbers, a 512 MB translation table memory 34 will need only 8 MB of FTL table data to support operating system LBA requests, leaving 504 MB of unused memory. As another example, a 24 MB FTL table 38 provides a sufficiently high hit ratio to sustain IO operations with minimal impact on data throughput performance when unloaded FTL data has to be retrieved to respond to LBAs not covered by a partial FTL table load.
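One plausible way to choose which portion of FTL table 38 to load, echoing the segment access frequency approach of the referenced application (with segment size and budget as assumed parameters), is:

```python
from collections import Counter

# Sketch: estimate the host's working set by counting accesses per
# fixed-size LBA segment, then load FTL entries only for hot segments.
SEGMENT_LBAS = 2048  # LBAs grouped per tracked segment (assumed granularity)

def hot_segments(lba_access_log, budget_segments):
    counts = Counter(lba // SEGMENT_LBAS for lba in lba_access_log)
    return [seg for seg, _ in counts.most_common(budget_segments)]
```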
If less than all of FTL table 38 is loaded to translation table memory 34, then a cache manager 39 executing as embedded code on processor 30 takes advantage of unused translation table memory 34 to define a cache 40 of information retrieved from flash memory 26. Cache manager 39 retrieves information associated with selected ones of the LBAs in the partial FTL table 38 load and stores the information in cache 40. As processor 30 receives LBA requests from the host device, cache manager 39 looks up the LBA in translation table memory 34 to determine if the information associated with the LBA is already stored in cache 40, and, if so, responds to the host device request with the cached information. By responding from cache 40, processor 30 provides a more rapid response without having to look up the information in flash memory 26. If the LBA request is to write information to flash memory 26, then cache manager 39 commands a write of the updated information to cache 40 to keep cache 40 synchronized with flash memory 26.
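Building on the TranslationMemory sketch above, the read and write paths through such a cache might be expressed as follows; read_flash and write_flash are assumed stand-ins for the controller's NAND operations:

```python
def handle_read(lba, mem, read_flash):
    # Cache hit: answer from repurposed translation table memory.
    if lba in mem.cache:
        return mem.cache[lba]
    # Cache miss: translate the LBA and read the NAND physical address.
    return read_flash(mem.ftl[lba])

def handle_write(lba, data, mem, write_flash):
    write_flash(mem.ftl[lba], data)
    # Keep the cache synchronized with flash memory on writes.
    if lba in mem.cache:
        mem.cache[lba] = data
```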
Cache manager 39 selects information to cache based upon predictions of the information that the host device will most frequently request from flash memory 26. In some instances, the selected information adapts as functions on the host device change. For example, particular LBA requests may relate to an application or set of data so that cache manager 39 refreshes cache 40 to prepare for anticipated LBA requests. For example, at host device startup, cache manager 39 loads information associated with LBAs that are called more frequently at startup. As another example, at the start of an application loaded at an LBA, cache manager 39 may load the LBA of the last document used by the application. In one example embodiment, cache manager 39 executes as embedded code saved in flash memory integrated with processor 30. In alternative embodiments, all or part of cache manager 39 may execute as instructions running with the host device operating system. For example, upon end user selection of a function, the operating system communicates a span of LBAs that processor 30 loads into cache 40.
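Such event-driven refreshes might be sketched as below, where the event-to-LBA table is a hypothetical placeholder for whatever prediction the caching policy uses:

```python
# Sketch: refresh the cache when host activity changes, e.g. at boot
# or application launch (the event -> LBA table is hypothetical).
PREFETCH_LBAS = {
    "startup": [0, 1, 2, 3],      # LBAs called more frequently at startup
    "app_launch": [4096, 4097],   # e.g. the application's last document
}

def refresh_cache(event, mem, read_flash):
    for lba in PREFETCH_LBAS.get(event, ()):
        mem.cache[lba] = read_flash(mem.ftl[lba])
```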
Referring now to
Referring now to
Referring now to
If at step 74 the information is not in cache, the process continues to step 88. At step 88, if the request is to read information, then a read of the information is performed from a NAND physical address based upon an LBA-to-physical-address translation. At step 90, if the request is a write, then a write is performed to a NAND physical address based upon an LBA-to-physical-address translation. At step 92, the host IO interface responds to the logical address request and at step 94 the process ends.
Referring now to
Although the present invention has been described in detail, it should be understood that various changes, substitutions and alterations can be made hereto without departing from the spirit and scope of the invention as defined by the appended claims.