Automatic read and write acceleration of data accessed by virtual machines

Information

  • Patent Grant
  • Patent Number
    9,699,263
  • Date Filed
    Friday, March 15, 2013
  • Date Issued
    Tuesday, July 4, 2017
Abstract
The various implementations described herein include methods and systems for automatic management of data access acceleration in a computer system executing a plurality of clients. The method includes: receiving data access commands from two or more clients to access data in objects identified by the data access commands; and processing the data access commands to update access history information for portions of the objects identified by the data access commands. The method further includes: in accordance with the access history information, automatically identifying and marking for acceleration the portions of the objects identified by the data access commands that satisfy an access based data acceleration policy; and accelerating the object portions marked for acceleration, including accelerating data writes and data reads of the object portions to and from a persistent cache that is shared by the two or more clients.
Description
TECHNICAL FIELD

The disclosed embodiments relate generally to accelerating access to data read and written by a set of virtual machines through the selective use of a persistent cache.


BACKGROUND

Server virtualization is the masking of physical server resources (e.g., processors, memory, etc.) from users of the server. Such server resources include the number and identities of individual physical servers, processors and operating systems. Server virtualization more efficiently utilizes server resources, improves server availability and assists in testing and development.


Virtualized enterprise data centers deploy tens of thousands of virtual machines over hundreds of physical servers. Efficient hardware utilization is a key goal in improving virtual machine systems. One of the main factors contributing to efficient hardware utilization is virtual machine density: the more virtual machines that can be run on a physical server, the more efficient the hardware utilization.


Access to server storage resources is often a system bottleneck that prevents full utilization of available hardware resources. The embodiments described below are configured to improve access to server storage resources.


SUMMARY

A server system, executing a plurality of virtual machines, accelerates frequently accessed data in a persistent cache. The persistent cache is shared by the plurality of virtual machines, and the persistent cache is typically much smaller in capacity than secondary storage; thus, only a small subset of secondary storage is accelerated. Determining which data to accelerate (e.g., which portions of the virtual disks used by the virtual machines in the server system), however, is challenging, and non-automated processes are unworkable. For example, the management of acceleration by IT personnel or a system administrator would be difficult, if not impossible, in systems having hundreds or thousands of virtual machines. Various embodiments of the systems and methods described herein are therefore designed for automatic management of data access acceleration in a server system that executes a plurality of virtual machines.


The server system retains and updates access history information for portions (e.g., blocks or sub-blocks) of the objects (e.g., virtual disks) associated with data access commands received from the virtual machines. In accordance with the access history information and an access based acceleration policy, the server system determines which portions of the objects to accelerate. The shared persistent cache significantly improves input-output (I/O) performance of a respective server system executing a plurality of virtual machines, leading to an increase in the number of virtual machines that can run on each server system in a virtualized data center.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of a distributed system, in accordance with some embodiments.



FIG. 2 is a block diagram of a server system included in FIG. 1, in accordance with some embodiments.



FIGS. 3A-3B illustrate data structures utilized by the server system included in FIG. 2, in accordance with some embodiments.



FIG. 4 is a schematic diagram of a persistent cache, in accordance with some embodiments.



FIGS. 5A-5B illustrate a flow diagram of a process for accelerating data read operations, in accordance with some embodiments.



FIG. 6 illustrates a flow diagram of a process for accelerating data write operations, in accordance with some embodiments.



FIGS. 7A-7B illustrate a flow diagram of a process for accelerating data access, in accordance with some embodiments.





Like reference numerals refer to corresponding parts throughout the drawings.


DESCRIPTION OF EMBODIMENTS

This detailed description covers methods and systems for automatic management of disk acceleration in a distributed system having a plurality of virtual machines running on each of a set of physical servers, along with other related concepts.


In some embodiments, a method for accelerating data access is performed by a computer system having one or more processors, memory and a persistent cache for storing accelerated data. The method includes: receiving data access commands from two or more clients to access data in objects identified by the data access commands; and processing the data access commands to update access history information for portions of the objects identified by the data access commands from the two or more clients. The method further includes: in accordance with the access history information, automatically identifying and marking for acceleration portions of the objects identified by the data access commands that satisfy an access based data acceleration policy, where the automatically identifying and marking are performed collectively for the two or more clients; and accelerating the object portions marked for acceleration, including accelerating data writes and/or data reads of the object portions to and from the persistent cache, where the persistent cache is shared by the two or more clients, and the access history information is based on the data access commands of the two or more clients.


In some embodiments, a method for accelerating data read operations is performed by a computer system having one or more processors, memory and a persistent cache for storing accelerated data. The method includes receiving data read commands from two or more clients to read data from objects identified by the data read commands and processing the data read commands to update usage history information for portions of the objects identified by the data read commands. The method further includes determining whether a respective portion of the objects identified by a data read command from a respective client of the two or more clients is stored in the persistent cache, where the persistent cache is shared by the two or more clients. In accordance with a determination that the respective portion of the objects identified by the data read command from the respective client is stored in the persistent cache, the method includes returning the respective portion of the objects from the persistent cache to the respective client of the two or more clients. In accordance with a determination that the respective portion of the objects identified by the data read command from the respective client is not stored in the persistent cache, the method includes identifying and marking for acceleration the respective portion of the objects identified by the data read command from the respective client if the respective portion of the objects satisfies an access based data acceleration policy in accordance with the usage history information. In accordance with a determination that the respective portion of the objects is not marked for acceleration, the method includes processing the data read command from the respective client, by reading from the secondary storage the respective portion of the objects, and returning the respective portion of the objects read from the secondary storage to the respective client of the two or more clients. In accordance with a determination that the respective portion of the objects is marked for acceleration, the method includes processing the data read command from the respective client by reading from the secondary storage the respective portion of the objects, writing the respective portion of the objects to the persistent cache, and returning the respective portion of the objects to the respective client of the two or more clients.


In some embodiments, a method for accelerating data write operations is performed by a computer system having one or more processors, memory and a persistent cache for storing accelerated data. The method includes: receiving data write commands from two or more clients to write data to objects identified by the data write commands, where the persistent cache is shared by the two or more clients; and processing the data write commands to update usage history information for portions of the objects identified by the data write commands. The method further includes automatically identifying and marking for acceleration a respective portion of the objects if the respective portion of the objects satisfies an access based data acceleration policy in accordance with the usage history information, where the automatically identifying and marking are performed collectively for the two or more clients. In accordance with a determination that the respective portion of the objects is marked for acceleration, the method includes writing the respective portion of the objects to the persistent cache and subsequently or concurrently writing the respective portion of the objects to the secondary storage. In accordance with a determination that the respective portion of the objects is not marked for acceleration, the method includes writing the respective portion of the objects to the secondary storage.


In another aspect, a computer system includes one or more processors, a persistent cache for storing accelerated data, and memory storing one or more programs for execution by the one or more processors, wherein the one or more programs include instructions that when executed by the one or more processors cause the computer system to perform any of the aforementioned methods.


In yet another aspect, a non-transitory computer readable medium stores one or more programs that when executed by one or more processors of a computer system cause the computer system to perform any of the aforementioned methods.


Numerous details are described herein in order to provide a thorough understanding of the example implementations illustrated in the accompanying drawings. However, some embodiments may be practiced without many of the specific details, and the scope of the claims is only limited by those features and aspects specifically recited in the claims. Furthermore, well-known methods, components, and circuits have not been described in exhaustive detail so as not to unnecessarily obscure more pertinent aspects of the implementations described herein.



FIG. 1 is a block diagram of a distributed system 100 including a secondary storage system 130 (sometimes called a storage system) connected to a plurality of computer systems 110 (e.g., server systems 110a through 110m) through a communication network 120 such as the Internet, other wide area networks, local area networks, metropolitan area networks, wireless networks, or any combination of such networks. In some embodiments, a respective server system 110 executes a plurality of virtual machines 112 and includes a respective persistent cache 118 shared by the plurality of virtual machines executed on the respective server system 110. In some embodiments, the persistent cache 118 comprises non-volatile solid state storage, such as flash memory. In some other examples, persistent cache 118 comprises EPROM, EEPROM, battery backed SRAM, battery backed DRAM, supercapacitor backed DRAM, ferroelectric RAM, magnetoresistive RAM, or phase-change RAM.


In some implementations, each of the plurality of virtual machines 112 is a client 114. Each client 114 executes one or more client applications 116 (e.g., a financial application or web hosting application) that submit data access commands (e.g., data read and write commands) to the respective server system 110. The data access commands access data in objects, such as virtual disks, some portions of which may be stored in RAM by the server, while other portions are stored in storage system 130. The respective server system 110, in turn, sends corresponding data access commands to storage system 130 so as to obtain or store data in accordance with the data access commands.


In some embodiments, secondary storage system 130 includes a front-end system 140, which obtains and processes data access commands from server systems 110 and returns results to the server systems 110. Secondary storage system 130 further includes one or more secondary storage subsystems 150 (e.g., storage subsystems 150a-150n). In some embodiments, a respective storage subsystem 150 stores the data for one or more objects (e.g., one or more virtual disks) accessible to clients on a respective server system 110. Each of the one or more objects comprises a plurality of portions. For example, a respective portion of the plurality of portions of an object is a block (also herein called an address block, since the block corresponds to a block of addresses), and the block comprises a plurality of sub-blocks (e.g., a respective sub-block is a page within a block). In another example, a portion of the object is a sub-block (also herein called an address sub-block, since the sub-block corresponds to a sub-block of addresses).
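For illustration only, the following C sketch shows one way a byte offset within an object could be decomposed into the block and sub-block indices against which access history is tracked; the 1 MB block and 4 KB sub-block sizes are assumptions, as the disclosure does not fix these sizes.

    #include <stdint.h>

    /* Assumed geometry; the disclosure does not fix these sizes. */
    #define BLOCK_SIZE    (1u << 20)   /* 1 MB address block     */
    #define SUBBLOCK_SIZE (1u << 12)   /* 4 KB page / sub-block  */

    /* Decompose a byte offset within an object (e.g., a virtual
       disk) into the (block, sub-block) pair whose access history
       is updated by the data access commands. */
    static void locate_portion(uint64_t byte_offset,
                               uint64_t *block, uint32_t *subblock)
    {
        *block    = byte_offset / BLOCK_SIZE;
        *subblock = (uint32_t)((byte_offset % BLOCK_SIZE) / SUBBLOCK_SIZE);
    }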


In some embodiments, a server system 110 allocates a distinct address space to each respective virtual machine 112 executed on the server system 110, and furthermore allocates space within the address space for one or more objects (e.g., one or more virtual disks). In some embodiments, each object accessed by a respective virtual machine 112 is denoted by a corresponding object identifier (Object ID), or in the case of a virtual disk, a virtual disk ID.



FIG. 2 is a block diagram of a server system 110 (sometimes herein called a computer system). Server system 110 includes one or more processors or processing units (CPUs) 210, one or more communication interfaces 230, memory 240, persistent cache 118, and one or more communication buses 220 for interconnecting these components. The communication buses 220 may include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. Memory 240 includes high-speed random access memory and optionally includes non-volatile memory. In some embodiments, memory 240 comprises a non-transitory computer readable storage medium. In some implementations, memory 240 stores the following programs, modules and data structures, or a subset or superset thereof:

    • an operating system 242 including procedures for handling various basic system services and performing hardware dependent tasks;
    • a plurality of virtual machines 112 (e.g., virtual machines 112a to 112v in FIG. 1);
    • a network communication module 244 configured to connect server system 110 to communication network(s) 120 in FIG. 1 via the one or more communication interfaces 230;
    • an access history update module 245 configured to update information within access history database 250;
    • a memory access decision module 246 configured to determine whether a respective portion of an object (e.g., a block or sub-block of an object identified by a data access command) is present in persistent cache 118;
    • an acceleration determination module 248 configured to determine whether to accelerate various portions (e.g., blocks or sub-blocks) of the objects accessed by data access commands received from two or more clients;
    • a tier assignment module 249 at least configured to assign (or reassign) respective portions (e.g., blocks) of the objects to tiers within tiered data structure 252;
    • an access history database 250 configured to store one or more items of usage history information, including a tiered data structure 252 configured to organize portions (e.g., blocks) of the objects into a plurality of tiers, object portion usage metadata 254, and an object portion to node map 256 configured to map a respective portion (e.g., a block) of the objects to a corresponding node of a plurality of nodes 320 in access history database 250;
    • a cache management driver 420, described below;
    • a cache address map 410, described below; and
    • a plurality of pointers including clean pointer 412, flush pointer 414, read pointer 418 and write pointer 416.


Each of the elements identified above may be stored in one or more of the previously mentioned memory devices of server system 110, and each element corresponds to a set of instructions for performing a function described above. The modules or programs (i.e., sets of instructions) identified above need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments.



FIG. 3A illustrates a plurality of data structures within access history database 250, including a plurality of nodes 320 (also sometimes called node data structures), each of which stores access history information, as well as other information, for a portion of an object. In accordance with a respective object identifier (object ID), object portion to node map 256 maps the portion (e.g., a block) of an object to a node 320 in access history database 250. Each node 320, excluding any unallocated nodes, contains information associated with the state and access history of a corresponding portion (e.g., a block, also sometimes called an address block) of an object. In some implementations, each respective node 320 of two or more of the nodes 320 includes the following items of information, or a subset or superset thereof:

    • an object ID plus offset 312 identifying the object portion (i.e., the portion of an object) corresponding to the respective node;
    • a tier number 314 identifying the tier within tiered data structure 252 to which the respective node (and the object portion corresponding to the node) belongs;
    • two or more linked list pointers 316 identifying nodes that are positioned immediately before and after the respective node in linked list 360; linked list 360 is a list of all the nodes 320 assigned to the same tier as the respective node (i.e., the tier having the tier number in field 314);
    • a write operations count 318 indicating a number of times data has been written to the respective portion of the objects (e.g., the respective block) corresponding to the respective node;
    • a read operations count 322 indicating a number of times data has been read from the object portion corresponding to the respective node;
    • a most recently used (MRU) marker 324 indicating the last time an operation was performed on the object portion corresponding to the respective node; in some implementations, the MRU marker is a timestamp indicating execution time for the last operation performed on the respective object portion; in some other implementations, the MRU marker stores an operation count value (similar to a serial number or other sequentially assigned value) indicating the last operation performed on the object portion corresponding to the respective node;
    • a write acceleration flag 326 indicating whether the object portion corresponding to the respective node is marked for write acceleration; and
    • a plurality of subset read counts 330, each indicating a number of times that data has been read from a respective subset of the object portion (e.g., a sub-block of an address block of an object) corresponding to the respective node.


In some implementations, each subset read count 330 has a value of 0 (not read), 1 (read once) or 2 (read 2 or more times). Subset read counts 330 function as read acceleration flags, where values 0 and 1 correspond to a flag value of “off” or “disabled,” and a value of 2 corresponds to a flag value of “on” or “enabled.” Furthermore, any attempt to increment a subset read count 330 that is already equal to 2 results in a value of 2 for the sub-block.
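As an illustration only, a hypothetical C rendering of a node 320 and of the saturating subset read count update is sketched below; the field widths and the 256 sub-blocks per block are assumptions, not part of the disclosure.

    #include <stdint.h>

    #define SUBBLOCKS_PER_BLOCK 256   /* assumed geometry */

    /* Hypothetical layout of a node 320; field names follow FIG. 3A. */
    struct node {
        uint64_t     object_id_plus_offset;   /* 312 */
        uint32_t     tier_number;             /* 314 */
        struct node *prev, *next;             /* 316: neighbors in linked list 360 */
        uint32_t     write_count;             /* 318 */
        uint32_t     read_count;              /* 322 */
        uint64_t     mru_marker;              /* 324: timestamp or operation count */
        uint8_t      write_accel_flag;        /* 326 */
        uint8_t      subset_read_count[SUBBLOCKS_PER_BLOCK];  /* 330 */
    };

    /* Subset read counts saturate at 2; a value of 2 functions as
       the "read acceleration enabled" flag for the sub-block. */
    static void bump_subset_read(struct node *n, unsigned sub)
    {
        if (n->subset_read_count[sub] < 2)
            n->subset_read_count[sub]++;
    }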



FIG. 3B illustrates tiered data structure 252, which is part of access history database 250, according to some embodiments. In some embodiments, tiered data structure 252 is a bucket array, with each bucket representing a different tier. Tiered data structure 252 contains tier numbers 0-N (e.g., tiers 352-358), where N is a positive integer greater than 1 and typically greater than 10 once the server system has processed a large number of data access commands. In some implementations, each tier of tiered data structure 252 includes the following items of information, or a subset or superset thereof (a hypothetical rendering in code follows the list):

    • a tier number 314;
    • a list head 342 containing a pointer to the head of a linked list 360 of nodes 320 within a respective tier;
    • a list end 344 containing a pointer to the end of the linked list 360 of nodes 320 within the respective tier;
    • a portion count 346 containing the number of object portions (e.g., address blocks) represented by the plurality of nodes 320 in the linked list 360; and
    • a usage range indicator 348 designating the range of usage counts for object portions (e.g., address blocks) within the respective tier.
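
As with the node structure above, this is a hypothetical C rendering of one tier bucket, with assumed field widths; it reuses struct node from the FIG. 3A sketch.

    /* Hypothetical layout of one bucket (tier) of tiered data
       structure 252; field names follow FIG. 3B. */
    struct tier {
        uint32_t     tier_number;    /* 314 */
        struct node *list_head;      /* 342: head of linked list 360 */
        struct node *list_end;       /* 344: end of linked list 360  */
        uint64_t     portion_count;  /* 346 */
        uint32_t     usage_lo;       /* 348: low end of usage range  */
        uint32_t     usage_hi;       /* 348: high end of usage range */
    };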


The plurality of tiers within tiered data structure 252 are arranged such that the lowest tier contains object portions (e.g., address blocks) having the lowest “usage rate,” and the highest tier contains object portions (e.g., address blocks) having the highest “usage rate.” Tier assignment module 249 is configured to assign each tier a range of usage rates (e.g., indicated via usage range indicator 348) corresponding to the usage rates of the object portions (e.g., address blocks) eligible for assignment to that tier. For example, all object portions (e.g., address blocks) with a usage rate of 1-3 are assigned to Tier 0 (the lowest tier), all object portions (e.g., address blocks) with a usage rate of 4-6 are assigned to Tier 1, etc.


In some embodiments, tier assignment module 249 is configured to determine a “usage rate” or a usage history value for an object portion (e.g., address block) by, for example, combining the write operations count 318 for the object portion with the read operations count 322 for the object portion. Tier assignment module 249 is configured to assign a respective object portion (e.g., a respective address block) to a tier based on the usage rate of the respective object portion and the usage range indicators 348 assigned to the plurality of tiers. For example, if the respective address block has a usage rate of 2, tier assignment module 249 will assign the respective address block to tier 0, where the usage range of tier 0 is 1-3. If the usage rate for a respective address block increases above the usage range assigned to a particular tier to which the block is currently assigned, then tier assignment module 249, for example, re-assigns the respective address block to a higher tier. Similarly, if the usage rate for a respective address block decreases below the usage range assigned to a particular tier to which the block is currently assigned, then tier assignment module 249, for example, re-assigns the respective address block to a lower tier.
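For illustration, a sketch of this usage-rate computation and tier reassignment, reusing the structures above; the three-counts-per-tier mapping is drawn from the 1-3 / 4-6 example and is an assumption, as is the move_node helper.

    /* One way to combine the counts into a usage rate. */
    static uint32_t usage_rate(const struct node *n)
    {
        return n->write_count + n->read_count;
    }

    /* Example mapping drawn from the text: usage 1-3 -> tier 0,
       usage 4-6 -> tier 1, and so on. */
    static uint32_t tier_for(uint32_t rate)
    {
        return rate == 0 ? 0 : (rate - 1) / 3;
    }

    /* Hypothetical helper: unlink a node from its current tier's
       linked list 360 and push it onto the new tier's list. */
    void move_node(struct node *n, uint32_t new_tier);

    /* Re-assign the node when its usage rate leaves the usage
       range of the tier to which it is currently assigned. */
    static void maybe_retier(struct node *n)
    {
        uint32_t t = tier_for(usage_rate(n));
        if (t != n->tier_number)
            move_node(n, t);
    }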


In some embodiments, the object portions (e.g., address blocks) assigned to a respective tier are represented by a linked list 360 of nodes 320. Each node 320 of the plurality of nodes 320 stores usage history for a respective object portion (e.g., a respective address block). In some implementations, the head of the linked list 360 is associated with a node 320 corresponding to the most recently updated object portion (e.g., address block) in the respective tier, and the end of the linked list 360 is associated with a node 320 corresponding to the least recently updated object portion (e.g., address block) in the respective tier.


Referring to FIG. 4, in some embodiments, data for accelerated portions of the objects (e.g., address blocks and/or sub-blocks) is stored in a persistent cache 118, which is implemented as a “log-structured cache.” That is, the persistent cache 118 (sometimes herein called “the cache,” for ease of discussion) provides caching services to a plurality of system components (e.g., multiple client systems or virtual machines), while being structured as a log, with data and metadata being sequentially written to the cache storage device. In this manner, persistent cache 118 operates as a circular buffer. The advantages of operating persistent cache 118 as a circular buffer include simplicity (particularly reduced internal metadata management requirements), automatic wear leveling of memory locations within persistent cache 118, and automatic garbage collection, which essentially eliminates the risk of stale data in the persistent cache 118 crowding out (e.g., preventing storage of) more current data in persistent cache 118. In some embodiments, a single log-structured persistent cache 118 is used to store cached data for multiple virtual machines (e.g., all virtual machines executed by a respective server system 110), thereby eliminating the need to separately manage the caching of data for each of the virtual machines.


In some embodiments, persistent cache 118 includes clean region 440, dirty region 450 and unused region 470. Both clean region 440 and dirty region 450 store cached data, while unused region 470 is an “empty” portion of persistent cache 118 that is ready to be overwritten with new data. Any of the clean, dirty and unused regions can be wrapped over the end boundary of persistent cache 118. In FIG. 4, for example, clean regions 440-1 and 440-2 are logically a single clean region 440 that is wrapped over the end boundary of persistent cache 118. Due to persistent cache 118 operating as a circular buffer, as will be explained below, the boundaries of the clean, dirty and unused regions 440, 450, 470 are adjusted as data is added to persistent cache 118 and as data is removed from the persistent cache 118. Additional information regarding the functioning of a log-structured cache as illustrated in FIG. 4 is found in U.S. Patent Application Publication No. 2011/0320733, which is hereby incorporated by reference in its entirety.


In some implementations, data stored in the log-structured cache (persistent cache 118) includes data corresponding to both write and read caches. Accordingly, the write and read caches share a circular buffer, and, in some implementations, write and read data are intermingled in the log-structured cache. In other implementations, the write and read caches are maintained separately in separate circular buffers, either in the same persistent cache 118, or in separate instances of persistent cache 118.


In some implementations, dirty region 450 contains both read and write cached data. In some implementations, write data in dirty region 450 is data stored in persistent cache 118, but not yet flushed to secondary storage system 130 (e.g., any of secondary storage subsystems 150 in FIG. 1). Write data stored in persistent cache 118 (whether in clean region 440 or dirty region 450) is said to be “accelerated.” The beginning of dirty region 450 is represented by flush pointer 414, and the end of dirty region 450 is represented by write pointer 416. As cache write data is flushed to secondary storage system 130, flush pointer 414 is advanced (e.g., incremented) so that it points to the next segment of write data not yet flushed to secondary storage. Similarly, write pointer 416 is advanced as new blocks of write data are stored to persistent cache 118, thereby “moving” a portion of persistent cache 118 that was formerly in unused region 470 into dirty region 450. Clean region 440, which is bounded at its beginning by clean pointer 412 and at its end by flush pointer 414, stores cached data that is also stored in secondary storage. Typically, clean region 440 stores both cached read data and write data, both of which are said to be “accelerated.”


Unused region 470, bounded at its beginning by write pointer 416 and at its end by clean pointer 412, represents an “empty” portion of persistent cache 118 that is ready to be overwritten with new data. In some implementations, unused region 470 corresponds to flash memory regions that have been erased in preparation for storing “new” blocks or sub-blocks of data that have been selected for acceleration, as well as updated data for blocks or sub-blocks already stored in persistent cache 118.
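For illustration, a minimal sketch (under an assumed segment granularity and capacity) of the pointer arithmetic implied by FIG. 4; real code would additionally ensure the write pointer never overruns the clean pointer.

    #include <stdint.h>

    #define CACHE_SEGMENTS 1000000u   /* assumed capacity, in segments */

    struct log_cache {
        uint64_t clean;   /* clean pointer 412: start of clean region */
        uint64_t flush;   /* flush pointer 414: clean/dirty boundary  */
        uint64_t write;   /* write pointer 416: dirty/unused boundary */
        /* unused region 470 runs from write back around to clean    */
    };

    /* Advance a pointer, wrapping over the end boundary of the cache. */
    static uint64_t advance(uint64_t p, uint64_t nsegs)
    {
        return (p + nsegs) % CACHE_SEGMENTS;
    }

    /* Appending new write data grows dirty region 450 into unused
       region 470. */
    static void on_append(struct log_cache *c, uint64_t nsegs)
    {
        c->write = advance(c->write, nsegs);
    }

    /* Flushing dirty data to secondary storage grows clean region 440. */
    static void on_flush(struct log_cache *c, uint64_t nsegs)
    {
        c->flush = advance(c->flush, nsegs);
    }

    /* Reclaiming clean space grows unused region 470. */
    static void on_reclaim(struct log_cache *c, uint64_t nsegs)
    {
        c->clean = advance(c->clean, nsegs);
    }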


Both clean region 440 and dirty region 450 store cached data (data for accelerated address block and/or sub-blocks) for multiple virtual machines (e.g., all the virtual machines executed by a respective server system 110). The cached data for the various virtual machines is interleaved within the clean region 440 and dirty region 450 and ordered within the clean region 440 and dirty region 450 in the same order (sometimes called log order) that the data was written to persistent cache 118.


Cache address map 410 maps accelerated portions of the objects (e.g., address sub-blocks) to specific locations in persistent cache 118. Cache address map 410 is used, when reading cached data from persistent cache 118, to locate requested data in persistent cache 118. In some embodiments, cache address map 410 is apportioned on a sub-block by sub-block basis. Cache management driver 420 handles both data read and write commands directed to persistent cache 118.
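As a sketch of how a read might consult cache address map 410 on a sub-block basis; map_lookup and the cache_loc type are assumptions standing in for whatever map implementation is used.

    #include <stdint.h>

    struct cache_loc { uint64_t segment; int valid; };

    /* Assumed map helper: resolve (object, block, sub-block) to a
       location in persistent cache 118, if one exists. */
    struct cache_loc map_lookup(uint64_t object_id,
                                uint64_t block, uint32_t subblock);

    /* Returns 1 on a cache hit (location in *seg_out), 0 on a miss,
       in which case the read falls through to secondary storage. */
    static int cache_locate(uint64_t oid, uint64_t blk, uint32_t sub,
                            uint64_t *seg_out)
    {
        struct cache_loc loc = map_lookup(oid, blk, sub);
        if (!loc.valid)
            return 0;
        *seg_out = loc.segment;
        return 1;
    }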



FIGS. 5A-5B illustrate a flow diagram of a method 500 for accelerating data read operations performed by a computer system (e.g., server system 110 in FIG. 1) having one or more processors, memory and a persistent cache for storing accelerated data. In some embodiments, method 500 is governed by a set of instructions stored in memory (e.g., a non-transitory computer readable storage medium) that are executed by the one or more processors of the computer system.


The computer system receives (502) data read commands from two or more clients to read data from objects identified by the data read commands. FIG. 1, for example, shows a respective server system 110 (sometimes herein called a computer system) configured to receive data read commands from two or more virtual machines (sometimes herein called clients 114) executed on respective server system 110 to read data from objects (e.g., one or more virtual disks) identified by the data read commands. In various embodiments, the number of virtual machines from which a respective server system 110 receives data read commands is more than 20, more than 50, or more than 100.


The computer system processes (504) the data read commands to update usage history information for portions of the objects identified by the data read commands. FIG. 2, for example, shows access history update module 245 (a component of server system 110) configured to process the data read commands to update usage history information in access history database 250 for portions of the objects (e.g., address blocks or sub-blocks) identified by the data read commands. For example, access history update module 245 is configured to update object portion usage metadata 254 within access history database 250 for portions (e.g., address sub-blocks) of the objects identified by the data read commands.


Referring to FIG. 3A, for example, access history update module 245 is configured to increment a read operations count 322 and update an MRU marker 324 within access history database 250 for each respective node of the plurality of nodes 320 corresponding to the portions of the objects (e.g., the address blocks) identified by the data read commands. In this example, access history update module 245 is further configured to increment one or more subset read counts 330 within the nodes of access history database 250 corresponding to the one or more portions of the object(s) (e.g., one or more sub-blocks of the address block) identified by the data read commands.


In some implementations, access history update module 245 is also configured to decrement a read operations count 322 or write operations count 318 within access history database 250 for each respective node of a plurality of nodes 320 corresponding to a portion of the objects (e.g., an address block) not identified by any data access command received within a predefined period of time, or alternatively not identified by any data access command of a predefined number of data access commands received by the computer system from the clients. Stated another way, in some implementations, the read operations count 322 or write operations count 318 for an object portion not recently accessed is decremented, where "not recently accessed" is automatically determined either periodically or each time a predefined number of data access commands have been received. Furthermore, as described in more detail below, the tiers to which those object portions are assigned and the object portions to mark for acceleration are re-evaluated in accordance with the decremented counts.
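A sketch of this aging step, reusing the node structure and maybe_retier helper from above; the recency window and the trigger for running the pass are assumptions.

    #define AGE_WINDOW 100000u   /* assumed: ops since last access */

    /* Called for each node periodically, or once per predefined
       number of received data access commands (assumed trigger).
       current_op is a monotonically increasing operation count,
       matching the operation-count form of MRU marker 324. */
    static void age_node(struct node *n, uint64_t current_op)
    {
        if (current_op - n->mru_marker < AGE_WINDOW)
            return;                      /* recently accessed */
        if (n->read_count > 0)
            n->read_count--;
        if (n->write_count > 0)
            n->write_count--;
        maybe_retier(n);   /* re-evaluate the node's tier, see above */
    }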


The computer system determines (506) whether a respective portion of the objects identified by a data read command from a respective client of the two or more clients is stored in the persistent cache. FIG. 2, for example, shows memory access decision module 246 (a component of server system 110) configured to determine whether a respective portion of the objects (e.g., an address sub-block) identified by a data read command from a respective client (e.g., virtual machine 112a) of the two or more clients (e.g., virtual machines 112a to 112v in FIG. 1) is stored in persistent cache 118. In some embodiments, memory access decision module 246 makes such a determination based on cache address map 410.


In accordance with a determination that the respective portion of the objects identified by the data read command from the respective client is stored in the persistent cache, the computer system returns (508) the respective portion of the objects from the persistent cache to the respective client of the two or more clients. FIG. 2, for example, shows server system 110 configured to return a respective object portion (e.g., address sub-block) from persistent cache 118 to the respective client (e.g., virtual machine 112a) in accordance with the previous determination by memory access decision module 246 that the respective portion is stored in persistent cache 118.


In accordance with a determination that the respective portion of the objects identified by the data read command from the respective client is not stored in the persistent cache, the computer system automatically identifies and marks (510) for acceleration the respective portion of the objects identified by the data read command from the respective client if the respective portion of the objects satisfies an access based data acceleration policy, in accordance with the usage history information. The automatic identification and marking of object portions is performed collectively for the two or more clients. FIG. 2, for example, shows acceleration determination module 248 (a component of server system 110) configured to automatically identify and mark for read acceleration a respective object portion (e.g., a respective address sub-block identified by a data read command from a respective client, such as virtual machine 112v) if the object portion satisfies an access based data acceleration policy, in accordance with the usage history information. In this example, acceleration determination module 248 is configured to perform these operations in accordance with a previous determination by memory access decision module 246 that the respective object portion identified by the data read command from the respective client is not stored in persistent cache 118 (e.g., cache address map 410 indicates that the respective portion is not present in persistent cache 118). In some embodiments, an object portion (e.g., the respective address sub-block identified by the data read command from the respective client) satisfies the data access acceleration policy and is marked for acceleration when (A) the node for the respective object portion is assigned to a tier at or above the “low-water mark” for acceleration (as described in more detail below), and (B) the subset read count 330 corresponding to the respective portion indicates a value of 2 or “enabled.”
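A sketch of this two-part test, reusing the structures above; treating the low-water-mark tier itself as qualifying (the >= comparison) follows the tier Y discussion of FIG. 6 below and is a reading of the text, not a requirement of it.

    /* Step 510: a sub-block qualifies for read acceleration when its
       block's tier is at or above the acceleration low-water mark
       and its saturating subset read count has reached 2. */
    static int qualifies_for_read_accel(const struct node *n,
                                        unsigned sub,
                                        uint32_t low_water_tier)
    {
        return n->tier_number >= low_water_tier
            && n->subset_read_count[sub] == 2;
    }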


In accordance with a determination that the respective portion of the objects is not marked for acceleration, the computer system processes (512) the data read command from the respective client, by: reading (514) from the secondary storage the respective portion of the objects; and returning (516) the respective portion of the objects read from the secondary storage to the respective client of the two or more clients. FIG. 2, for example, shows server system 110 configured to process the data read command from the respective client (e.g., virtual machine 112a) by reading the respective portion (e.g., a respective address sub-block) from secondary storage 150 (e.g., within secondary storage system 130 in FIG. 1) via network communications module 244. Server system 110, as shown in FIG. 2, is further configured to return the respective portion (e.g., the respective address sub-block) read from secondary storage 150 to the respective client (e.g., virtual machine 112a). In this example, server system 110 is configured to perform these operations in accordance with the previous determination by acceleration determination module 248 that the respective portion (e.g., the respective address sub-block) of the objects is not marked for acceleration. In some embodiments, an object portion (e.g., the respective address sub-block) identified by the data read command from a client is not marked for acceleration when either (A) the subset read count 330 corresponding to the object portion indicates a value of 0 or 1 or “disabled,” or (B) the address block that includes the object portion has a usage rate (e.g., the sum of the write operations count 318 and read operations count 322 for the address block, see FIG. 3A) that does not meet the minimum usage rate that qualifies address blocks for acceleration.


In accordance with a determination that the respective portion of the objects is marked for acceleration, the computer system processes (518) the data read command from the respective client, by: reading (520) from the secondary storage the respective portion of the objects; writing (522) the respective portion of the objects to the persistent cache; and returning (524) the respective portion of the objects to the respective client of the two or more clients. FIG. 2, for example, shows server system 110 configured to process the data read command from the respective client (e.g., virtual machine 112a) by reading an object portion (e.g., a respective address sub-block identified by the data read command from the respective client, e.g., virtual machine 112a) from secondary storage 150 (e.g., within secondary storage system 130 in FIG. 1) via network communications module 244. FIG. 2 shows server system 110 further configured to write the object portion (e.g., the respective address sub-block) read from secondary storage 150 to persistent cache 118. FIG. 2, for example, further shows server system 110 configured to return the object portion (e.g., the respective address sub-block) to the respective client (e.g., virtual machine 112a). In this example, server system 110 is configured to perform these operations in accordance with the previous determination by acceleration determination module 248 that the object portion (e.g., the respective address sub-block identified by the data read command from the respective client) is marked for acceleration.
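A sketch of steps 518-524 as a whole; secondary_read, cache_append, and client_return are assumed helpers standing in for the modules of FIG. 2, and the return-before-cache-write latency optimization mentioned below is omitted for brevity.

    #include <stddef.h>
    #include <stdint.h>

    /* Assumed helpers. */
    int  secondary_read(uint64_t oid, uint64_t blk, uint32_t sub,
                        void *buf, size_t len);
    void cache_append(uint64_t oid, uint64_t blk, uint32_t sub,
                      const void *buf, size_t len);
    int  client_return(const void *buf, size_t len);

    /* Step 518: cache miss on a portion marked for acceleration. */
    static int read_accelerated_miss(uint64_t oid, uint64_t blk,
                                     uint32_t sub, void *buf, size_t len)
    {
        if (secondary_read(oid, blk, sub, buf, len) != 0)  /* step 520 */
            return -1;
        cache_append(oid, blk, sub, buf, len);             /* step 522 */
        return client_return(buf, len);                    /* step 524 */
    }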


In some embodiments, returning the respective portion of the objects comprises returning (526) the respective portion of the objects read from the secondary storage to the respective client of the two or more clients. For example, in some implementations, the object portion read from secondary storage is written to an intermediate buffer (not shown in the figures), and then copied from the intermediate buffer to a memory location at which the requesting client receives the object; and furthermore the same data (object portion) is copied from the intermediate buffer to the persistent cache (522). In some implementations, to minimize latency, the requested object portion is returned to the requesting client prior to the same data being written to persistent cache 118. FIG. 2, for example, shows server system 110 configured to return the object portion (e.g., the respective address sub-block) from secondary storage 150 to the respective client (e.g., virtual machine 112a) of the two or more clients.


Alternatively, in some embodiments, returning the respective portion of the objects comprises returning (528) the respective portion of the objects from the persistent cache to the respective client of the two or more clients. FIG. 2, for example, shows server system 110 configured to return the object portion (e.g., the respective address sub-block) from persistent cache 118 to the respective client (e.g., virtual machine 112a) of the two or more clients.



FIG. 6 illustrates a flow diagram of a method 600 for accelerating data write operations performed by a computer system (e.g., server system 110 in FIG. 1) having one or more processors, memory and a persistent cache for storing accelerated data. In some embodiments, method 600 is governed by a set of instructions stored in memory (e.g., a non-transitory computer readable storage medium) that are executed by the one or more processors of the computer system.


The computer system receives (602) data write commands from two or more clients to write data to objects identified by the data write commands. FIG. 1, for example, shows a respective server system 110 configured to receive data write commands from two or more virtual machines executed on respective server system 110 to write data to objects (e.g., one or more virtual disks) identified by the data write commands.


The computer system processes (604) the data write commands to update usage history information for portions of the objects identified by the data write commands. FIG. 2, for example, shows access history update module 245 configured to process the data write commands to update usage history information in access history database 250 for portions of the objects (e.g., address blocks) identified by the data write commands. For example, access history update module 245 is configured to update object portion usage metadata 254 within access history database 250 for portions (e.g., address blocks) of the objects identified by the data write commands. Referring to FIG. 3A, for example, access history update module 245 is configured to increment a write operations count 318 and update an MRU marker 324 within access history database 250 for each respective node of the plurality of nodes 320 corresponding to the object portions (e.g., address blocks) identified by the data write commands.


Furthermore, as discussed above, in some implementations, access history update module 245 is also configured to decrement a read operations count 322 or write operations count 318 within access history database 250 for each respective node of a plurality of nodes 320 corresponding to a portion of the objects (e.g., an address block) not identified by any data access command received within a predefined period of time, or alternatively not identified by any data access command of a predefined number of data access commands received by the computer system from the clients.


In accordance with the access history information, the computer system automatically identifies and marks (606) for acceleration a respective portion of the objects identified by a data write command from a respective client of the two or more clients satisfying an access based data acceleration policy. FIG. 2, for example, shows acceleration determination module 248 configured to automatically identify and mark for write acceleration a respective object portion (e.g., a respective address block identified by a data write command from a respective client, such as virtual machine 112v) satisfying an access based data acceleration policy, in accordance with the usage history information. In some embodiments, the respective object portion (e.g., the respective address block identified by the data write command from the respective client) satisfies the access based data acceleration policy when a write acceleration flag 326 for the node associated with the respective portion (e.g., the respective address block) is enabled (e.g., indicates a flag value of 1). In some embodiments, a write acceleration flag 326 for the node corresponding to an object portion (e.g., address block) is enabled when the node is assigned to a tier within tiered data structure 252 that qualifies the corresponding object portions for acceleration.


In some embodiments, only a predefined number of object portions (e.g., address blocks) qualify for acceleration, based on the storage capacity of the persistent cache 118. In some embodiments, acceleration determination module 248 is configured to identify a “low-water mark” for object portions (e.g., address blocks) that meet the minimum usage rate that qualifies for acceleration. For example, in a server system 110 comprising a persistent cache with storage capacity to accelerate data from 800,000 address blocks, acceleration determination module 248 is configured to identify the tiers with the highest usage rates whose total number of address blocks is, in the aggregate, no more than 800,000. The “low-water mark” identifies a respective object portion (e.g., address block), or set of object portions (e.g., a set of address blocks), in a respective tier (e.g., tier Y, for ease of reference) having the minimum usage rate included in the 800,000 accelerated object portions. Acceleration determination module 248 is configured to mark all object portions (e.g., address blocks) corresponding to nodes in tier Y for write acceleration (e.g., the respective write acceleration flags 326 for all address blocks in tier Y are enabled). In some implementations, one or more object portions (e.g., one or more address blocks) in tier Y below the “low-water mark” are also marked for write acceleration. Furthermore, acceleration determination module 248 marks all portions of the objects (e.g., all address blocks) in tiers above tier Y (i.e., all tiers having usage rate ranges higher than tier Y) for write acceleration (e.g., the respective write acceleration flags 326 for all address blocks in these tiers are enabled). It is noted that the “low-water mark” qualifying a portion of the objects (e.g., an address block) for acceleration changes over time, due to fluctuations in the mix of data read and data write commands.
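A sketch of this selection, reusing struct tier from the FIG. 3B rendering above; the tier-array layout and the block-granularity capacity are assumptions.

    /* Walk tiers from the highest usage range downward, accumulating
       portion counts 346 until the persistent cache's capacity (e.g.,
       800,000 address blocks) is reached. The tier returned is "tier
       Y", the tier containing the low-water mark; all higher tiers
       are fully marked for acceleration. */
    static uint32_t find_low_water_tier(const struct tier *tiers,
                                        uint32_t num_tiers,
                                        uint64_t capacity_blocks)
    {
        uint64_t total = 0;
        for (uint32_t t = num_tiers; t-- > 0; ) {
            total += tiers[t].portion_count;
            if (total >= capacity_blocks)
                return t;
        }
        return 0;   /* everything fits: all tiers qualify */
    }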


In accordance with a determination that the respective portion of the objects is marked for acceleration, the computer system writes (608) the respective portion of the objects to the persistent cache and subsequently or concurrently writes the respective portion of the objects to the secondary storage. If a write-back caching methodology is used, the source of the data subsequently written to secondary storage is the persistent cache, and the write to secondary storage occurs at a later time (e.g., when more room is needed in unused region 470 (FIG. 4) for writing new data to persistent cache 118). If a write-through caching methodology is used, the source of the data written to secondary storage is either the client (which concurrently sends the data to the persistent cache and secondary storage) or the persistent cache. FIG. 2, for example, shows server system 110 configured to write an object portion (e.g., an address block identified by the data write command from the respective client, such as virtual machine 112a) to the persistent cache 118. FIG. 2, for example, further shows server system 110 configured to subsequently write the object portion (e.g., an address block identified by the data write command from the respective client, such as virtual machine 112a) to secondary storage 150 (e.g., within secondary storage system 130 in FIG. 1). In this example, server system 110 is configured to perform these operations in accordance with the previous determination by acceleration determination module 248 that the object portion is marked for acceleration.


In some implementations, server system 110 is configured to implement a write-through methodology whereby, for example, the respective object portion (e.g., the respective address block) identified by the data write command from the respective client is concurrently written to persistent cache 118 and secondary storage 150. In some implementations, server system 110 is configured to implement a write-back methodology whereby, for example, the respective object portion (e.g., the respective address block) identified by the data write command from the respective client is written to persistent cache 118 and subsequently written to secondary storage after some delay. In this example, persistent cache 118 is configured as a circular buffer (also called a log-structured cache) as discussed above with respect to FIG. 4, whereby the respective object portion (e.g., the respective address block) is written to secondary storage 150 when the respective object portion stored in dirty region 450 is subsequently flushed to secondary storage 150. The respective object portion is then retained in clean region 440 of persistent cache 118 until that portion of clean region 440 is reclaimed for inclusion in unused region 470.


In accordance with a determination that the respective portion of the objects is not marked for acceleration, the computer system writes (610) the respective portion of the objects to the secondary storage. FIG. 2, for example, shows server system 110 configured to write a respective object portion (e.g., a respective address block) identified by the data write command from a respective client (e.g., virtual machine 112a) to secondary storage 150 via network communications module 244 in accordance with a previous determination by acceleration determination module 248 that the respective object portion (e.g., the respective address block) is not marked for acceleration.
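A sketch of steps 608/610 under the two caching methodologies described above, reusing the includes and struct node from the earlier sketches; secondary_write and cache_append_write are assumed helpers, and the compile-time switch is only a stand-in for whichever policy a given implementation adopts.

    /* Assumed helpers. */
    int  secondary_write(uint64_t oid, uint64_t blk,
                         const void *buf, size_t len);
    void cache_append_write(uint64_t oid, uint64_t blk,
                            const void *buf, size_t len);

    static int handle_write(struct node *n, uint64_t oid, uint64_t blk,
                            const void *buf, size_t len)
    {
        if (!n->write_accel_flag)                    /* step 610 */
            return secondary_write(oid, blk, buf, len);

        cache_append_write(oid, blk, buf, len);      /* step 608 */
    #ifdef WRITE_THROUGH
        /* Write-through: secondary storage updated concurrently. */
        return secondary_write(oid, blk, buf, len);
    #else
        /* Write-back: data sits in dirty region 450 until the flush
           pointer reaches it; see FIG. 4. */
        return 0;
    #endif
    }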



FIGS. 7A-7B illustrate a flow diagram of a method 700 for accelerating data access performed by a computer system (e.g., server system 110 in FIG. 1). The computer system includes (702) one or more processors, memory and a persistent cache for storing accelerated data. In some embodiments, method 700 is governed by a set of instructions stored in memory (e.g., a non-transitory computer readable storage medium) that are executed by the one or more processors of the computer system.


The persistent cache is shared (704) by the two or more clients. FIG. 1, for example, shows server system 110m configured to share persistent cache 118m between virtual machines 112a-112v. For example, each virtual machine executed on server system 110m is a client. In some embodiments, the persistent cache comprises (706) non-volatile solid state storage, such as flash memory, or any of the other examples of non-volatile storage provided above.


The computer system receives (708) data access commands from two or more clients to access data in objects identified by the data access commands. FIG. 1, for example, shows a respective server system 110 configured to receive data access commands from two or more clients executed on respective server system 110 to access data in objects identified by the data access commands. In some embodiments, the two or more clients comprise (710) virtual machines executed by the computer system. FIG. 1, for example, shows server system 110m executing a plurality of virtual machines (e.g., virtual machines 112a-112v). In some embodiments, the objects comprise (712) one or more virtual disks. FIG. 1, for example, shows secondary storage system 130 comprising a plurality of secondary storage subsystems (e.g., secondary storage subsystems 150a-150n). In this example, each secondary storage subsystem 150 comprises one or more virtual disks containing data accessible to the two or more clients.


The computer system processes (714) the data access commands from the two or more clients to update access history information for portions of the objects identified by the data access commands from the two or more clients, where the access history information is based on the data access commands of the two or more clients. FIG. 2, for example, shows access history update module 245 configured to process the data access commands to update access history information in access history database 250 for portions of the objects (e.g., address blocks or sub-blocks) identified by the data access commands. For example, access history update module 245 is configured to update a write operations count 318, a read operations count 322 and an MRU marker 324 within access history database 250 corresponding to each portion of the objects (e.g., each address block) identified by the data access commands. In some embodiments, access history update module 245 is further configured to update one or more subset read counts 330 within access history database 250 corresponding to one or more subsets of the portions of the objects (e.g., one or more address sub-blocks) identified by the data access commands when the data access commands comprise data read commands.


Furthermore, as described above, in some embodiments access history update module 245 is also configured to decrement a write operations count 318 or a read operations count 322 within access history database 250 corresponding to each of the portions of the objects (e.g., address block) not identified by the data access commands. In some embodiments, access history update module 245 is configured to update one or more of the aforementioned data structures within access history database 250 either upon receiving data access commands or based on predefined criteria (e.g., upon receiving a predefined number of data access commands or after a predefined period of time).


In some embodiments, the portions of the objects comprise (716) respective blocks or sub-blocks of the objects. For example, a portion of the objects is a block (sometimes called an address block, as discussed above), and the block comprises a plurality of sub-blocks. In another example, a portion of the objects is a sub-block. In some embodiments, the access history information includes (718) at least one usage history value for a respective block and at least one distinct respective usage history value for each sub-block of the respective block. FIG. 3A, for example, shows items of information for a respective node corresponding to an object portion (e.g., the portion of a virtual disk, or other object, called a block or address block). In this example, the items of information include a write operations count 318 for the node corresponding to the object portion (e.g., the address block), a read operations count 322 for the respective node corresponding to the object portion (e.g., the address block), and one or more subset read counts 330 for the respective node corresponding to each of one or more subsets of the object portion (e.g., one or more address sub-blocks of the address block).


In some embodiments, the access history information includes (720) at least one operations count for each respective object portion of a plurality of object portions identified by the data access commands. FIG. 3A, for example, shows a write operations count 318 and a read operations count 322 for a respective node corresponding to a portion of the objects (e.g., an address block of a virtual disk). As will be understood by one skilled in the art, in this example, access history database 250 similarly includes items of information for all other nodes 320, which correspond to other object portions (e.g., remaining address blocks) identified by data access commands received from the two or more clients.


In some embodiments, processing the data access commands to update access history information comprises assigning (724) the object portions identified by the data access commands to tiers in accordance with the data acceleration policy and the access history information. FIG. 2, for example, shows tier assignment module 249 configured to assign (or reassign) object portions (e.g., address blocks) to tiers in accordance with the data acceleration policy and the access history information. For example, as discussed above with reference to FIGS. 3B and 6, tier assignment module 249 is configured to determine respective usage rates for the object portions (e.g., address blocks) identified by the data access commands. Tier assignment module 249 is further configured to assign the object portions to tiers based on the respective usage rates for the portions of the objects and the usage range indicators 348 assigned to the plurality of tiers.
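Continuing the sketch above, tier assignment might look like the following; the tier table, its bounds, and the additive combination of the operation counts are illustrative assumptions.

```python
# Each tier carries a usage range (cf. usage range indicators 348); tiers are
# assumed here to be ordered from highest usage to lowest.
TIERS = [
    ("tier-0", 1000, float("inf")),  # (name, low bound, high bound)
    ("tier-1", 100, 1000),
    ("tier-2", 10, 100),
    ("tier-3", 0, 10),
]

def usage_rate(rec) -> int:
    """One plausible usage rate: combine the per-block operation counts
    (rec is an AccessRecord from the sketch above)."""
    return rec.write_count + rec.read_count

def assign_tier(rec) -> str:
    """Place a block in the tier whose usage range covers its usage rate."""
    rate = usage_rate(rec)
    for name, low, high in TIERS:
        if low <= rate < high:
            return name
    return TIERS[-1][0]  # fall through to the lowest tier
```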


In some embodiments, updating the access history information comprises (726): incrementing a usage history value for a respective portion of the objects assigned to a respective tier for which recent usage criteria are satisfied; and decrementing a usage history value for other portions of the objects assigned to the respective tier for which recent usage criteria are not satisfied. Recent usage criteria are satisfied when a respective portion of the objects (e.g., a respective address block) is identified in one or more current data access commands, or when MRU marker 324 corresponding to the respective portion of the objects indicates that the respective portion of the objects has been accessed within a predefined period of time or, alternatively, that fewer than a predefined number of data access commands have been received since the respective portion of the objects was last identified in any of the data access commands received by the server system. Recent usage criteria are not satisfied when a respective portion of the objects is not identified in the one or more current data access commands, and either the respective portion of the objects has not been identified in any data access commands for a predefined period of time or, alternatively, at least a predefined number of data access commands have been received by server system 110 since the respective portion of the objects was last identified in any data access commands.
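The recent-usage rule might be applied per tier as in the sketch below; judging recency by a command-count window is one of the alternatives named above, and the window size and the representation of usage history values as a per-tier mapping are assumptions.

```python
RECENCY_WINDOW = 1024  # assumed: commands since last access to still count as recent

def recent_usage_satisfied(rec, block_id, current_ids: set, now_tick: int) -> bool:
    """A portion satisfies the recent usage criteria if it appears in a current
    command or its MRU marker falls within the recency window."""
    return block_id in current_ids or (now_tick - rec.mru_marker) < RECENCY_WINDOW

def refresh_tier_usage(tier: dict, db: dict, current_ids: set, now_tick: int) -> None:
    """tier maps block_id -> usage history value for one tier: bump values for
    recently used portions, decay the rest."""
    for block_id in tier:
        if recent_usage_satisfied(db[block_id], block_id, current_ids, now_tick):
            tier[block_id] += 1
        else:
            tier[block_id] = max(0, tier[block_id] - 1)
```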


As discussed above with respect to FIG. 3B, in some embodiments, tier assignment module 249 is configured to determine a “usage rate” or a usage history value for a respective portion of the objects (e.g., a respective address block) by, for example, combining the write operations count 318 for the respective portion of the objects with the read operations count 322 for the respective portion of the objects.


In accordance with the access history information, the computer system automatically identifies and marks (728) for acceleration portions of the objects identified by the data access commands that satisfy an access based data acceleration policy, where the automatically identifying and marking are performed collectively for the two or more clients. FIG. 2, for example, shows acceleration determination module 248 configured to identify and mark for acceleration, in accordance with the access history information, portions of the objects (e.g., address blocks or sub-blocks) identified by the data access commands that satisfy the access based data acceleration policy. The above discussion of FIG. 5A provides a detailed description of marking portions of the objects (e.g., address sub-blocks) for read acceleration, and the above discussion of FIG. 6 provides a detailed description of marking portions of the objects (e.g., address blocks) for write acceleration. In some embodiments, read acceleration is determined on a sub-block-by-sub-block basis, and write acceleration is determined on a block-by-block basis.
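The split granularity (read acceleration per sub-block, write acceleration per block) could be captured with two marking sets, as in this sketch; the numeric thresholds stand in for whatever the access based data acceleration policy specifies.

```python
READ_SUBSET_THRESHOLD = 4   # assumed policy threshold for sub-block reads
WRITE_BLOCK_THRESHOLD = 8   # assumed policy threshold for block writes

def mark_for_acceleration(db: dict):
    """Return (read_marks, write_marks): read acceleration decided per
    sub-block, write acceleration per block."""
    read_marks, write_marks = set(), set()
    for block_id, rec in db.items():
        if rec.write_count >= WRITE_BLOCK_THRESHOLD:
            write_marks.add(block_id)
        for sub_id, count in rec.subset_read_counts.items():
            if count >= READ_SUBSET_THRESHOLD:
                read_marks.add((block_id, sub_id))
    return read_marks, write_marks
```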


In some embodiments, identifying and marking for acceleration portions of the objects comprises identifying and marking (730) for acceleration portions of the objects in accordance with the tiers to which the portions of the objects have been assigned. The above discussion of FIG. 6 provides a detailed description of identifying and marking for write acceleration portions of the objects (e.g., address blocks) in accordance with the tiers to which the portions of the objects are assigned. For example, only the object portions (e.g., address blocks) having nodes 320 within tiers (see FIGS. 3A, 3B) that are at or above a “low-water mark” are identified by acceleration determination module 248 as being qualified for acceleration and are marked for acceleration.
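One way to realize a low-water mark, sketched below under the assumption that tiers are walked from highest usage downward, is to mark whole tiers until the next tier would overflow the persistent cache's block capacity.

```python
def mark_by_tier(tiers_high_to_low: list, cache_capacity_blocks: int) -> set:
    """tiers_high_to_low: list of sets of block ids, ordered from the highest
    usage tier to the lowest. Whole tiers are marked until adding the next
    tier would exceed the persistent cache capacity; that boundary plays the
    role of the low-water mark."""
    marked, used = set(), 0
    for tier_blocks in tiers_high_to_low:
        if used + len(tier_blocks) > cache_capacity_blocks:
            break  # this tier and all lower tiers fall below the low-water mark
        marked |= tier_blocks
        used += len(tier_blocks)
    return marked
```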


The computer system accelerates (732) the object portions marked for acceleration by accelerating data access, including accelerating data writes, data reads, or both, of the object portions to and from the persistent cache. FIG. 2, for example, shows server system 110 configured to accelerate portions of objects (e.g., address blocks or sub-blocks) marked for acceleration by acceleration determination module 248. The above discussion of FIGS. 5A-5B and 6 provides a detailed description of the read and write operations, respectively, performed after marking portions of the objects for acceleration.
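Tying the pieces together, an accelerated data path might consult the marks on every client I/O as in the following sketch; the cache and secondary-storage interfaces, and the write-through ordering, are assumptions for exposition.

```python
class AcceleratedDatapath:
    """Route marked I/O through the persistent cache shared by the clients.
    `cache` and `secondary` are assumed to expose read()/write() methods,
    with cache.read() returning None on a miss."""

    def __init__(self, cache, secondary, read_marks: set, write_marks: set):
        self.cache, self.secondary = cache, secondary
        self.read_marks, self.write_marks = read_marks, write_marks

    def read(self, block_id, sub_id):
        if (block_id, sub_id) in self.read_marks:
            data = self.cache.read((block_id, sub_id))
            if data is not None:
                return data               # accelerated read: served from cache
        return self.secondary.read((block_id, sub_id))

    def write(self, block_id, data):
        if block_id in self.write_marks:
            self.cache.write(block_id, data)   # accelerated (write-through) path
        self.secondary.write(block_id, data)   # secondary storage always updated
```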


Although the terms “first,” “second,” etc. have been used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without changing the meaning of the description, so long as all occurrences of the “first contact” are renamed consistently and all occurrences of the “second contact” are renamed consistently. The first contact and the second contact are both contacts, but they are not the same contact.


The terminology used above is for the purpose of describing particular embodiments only and is not intended to be limiting of the claims. As used in the description of the embodiments and the appended claims, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.


The foregoing description has, for purposes of explanation, been given with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit any claimed invention to the precise forms disclosed. Many modifications and variations are possible in view of the descriptions and examples provided above. The embodiments were chosen and described in order to best explain the principles of the invention and its practical applications, and thereby to enable others skilled in the art to best utilize the various embodiments, with such modifications as are appropriate to particular uses and requirements.

Claims
  • 1. A method for accelerating data access, performed by a computer system having one or more processors, memory, a tiered data structure stored in the memory, the tiered data structure comprising a plurality of tiers, and a persistent cache for storing accelerated data, the method comprising: receiving, at the computer system, data access commands from two or more clients to access data in objects identified by the data access commands; processing, by the computer system, the data access commands to update access history information for respective object portions of the objects identified by the data access commands from the two or more clients, wherein the processing includes: in accordance with the access history information, determining usage rates for each of the respective object portions identified by the data access commands, and assigning each of the respective object portions identified by the data access commands to one of the plurality of tiers within the tiered data structure in accordance with a determination that a respective determined usage rate for a respective object portion is within a range of usage rates for a respective tier to which the respective object portion is assigned; in accordance with the access history information, automatically identifying and marking for acceleration at least some of the respective object portions identified by the data access commands that satisfy an access based data acceleration policy; and accelerating the object portions marked for acceleration, including accelerating data writes and data reads of the object portions to and from the persistent cache.
  • 2. The method of claim 1, wherein the persistent cache comprises non-volatile solid state storage.
  • 3. The method of claim 1, wherein the two or more clients comprise virtual machines executed by the computer system.
  • 4. The method of claim 1, wherein the objects comprise one or more virtual disks.
  • 5. The method of claim 1, wherein the respective object portions comprise respective address blocks or sub-blocks of the objects.
  • 6. The method of claim 5, wherein the access history information includes at least one usage history value for a respective address block and at least one distinct respective usage history value for each sub-block of the respective address block.
  • 7. The method of claim 1, wherein the access history information includes at least one operations count for each respective object portion of a plurality of object portions identified by the data access commands.
  • 8. The method of claim 1, wherein updating the access history information comprises: incrementing a usage history value for a respective object portion assigned to a respective tier for which recent usage criteria are satisfied; and decrementing a usage history value for other portions of objects assigned to the respective tier for which recent usage criteria are not satisfied.
  • 9. A computer system, comprising: one or more processors; a persistent cache for storing accelerated data; and memory storing a tiered data structure, the tiered data structure comprising a plurality of tiers, and one or more programs for execution by the one or more processors, wherein the one or more programs include instructions that when executed by the one or more processors cause the computer system to: receive, at the computer system, data access commands from two or more clients to access data in objects identified by the data access commands; process, by the computer system, the data access commands to update access history information for respective object portions of the objects identified by the data access commands from the two or more clients, wherein the instructions for processing the data access commands include instructions that when executed by the one or more processors cause the computer system to: in accordance with the access history information, determine usage rates for each of the respective object portions identified by the data access commands, and assign each of the respective object portions identified by the data access commands to one of the plurality of tiers within the tiered data structure in accordance with a determination that a respective determined usage rate for a respective object portion is within a range of usage rates for a respective tier to which the respective object portion is assigned; in accordance with the access history information, automatically identify and mark for acceleration at least some of the respective object portions identified by the data access commands that satisfy an access based data acceleration policy; and accelerate the object portions marked for acceleration, including accelerating data writes and data reads of the object portions to and from the persistent cache.
  • 10. The computer system of claim 9, wherein the two or more clients comprise virtual machines executed by the computer system.
  • 11. The computer system of claim 9, wherein the respective object portions comprise respective address blocks or sub-blocks of the objects.
  • 12. The computer system of claim 11, wherein the access history information includes at least one usage history value for a respective address block and at least one distinct respective usage history value for each sub-block of the respective address block.
  • 13. The computer system of claim 9, wherein the access history information includes at least one operations count for each respective object portion of a plurality of object portions identified by the data access commands.
  • 14. A non-transitory computer readable medium storing one or more programs that when executed by one or more processors of a computer system cause the computer system to: receive, at the computer system, data access commands from two or more clients to access data in objects identified by the data access commands; process, by the computer system, the data access commands to update access history information for respective object portions of the objects identified by the data access commands from the two or more clients, wherein causing the computer system to process the data access commands includes causing the computer system to: in accordance with the access history information, determine usage rates for each of the respective object portions identified by the data access commands, and assign each of the respective object portions identified by the data access commands to one of the plurality of tiers within the tiered data structure in accordance with a determination that a respective determined usage rate for a respective object portion is within a range of usage rates for a respective tier to which the respective object portion is assigned; in accordance with the access history information, automatically identify and mark for acceleration at least some of the respective object portions identified by the data access commands that satisfy an access based data acceleration policy; and accelerate the object portions marked for acceleration, including accelerating data writes and data reads of the object portions to and from the persistent cache.
  • 15. The non-transitory computer readable medium of claim 14, wherein the two or more clients comprise virtual machines executed by the computer system.
  • 16. The non-transitory computer readable medium of claim 14, wherein the portions of the objects comprise respective address blocks or sub-blocks of the objects.
  • 17. The non-transitory computer readable medium of claim 16, wherein the access history information includes at least one usage history value for a respective address block and at least one distinct respective usage history value for each sub-block of the respective address block.
  • 18. The non-transitory computer readable medium of claim 14, wherein the access history information includes at least one operations count for each respective object portion of a plurality of object portions identified by the data access commands.
  • 19. The non-transitory computer readable medium of claim 14, wherein: processing the data access commands to update access history information comprises assigning the portions of the objects identified by the data access commands to tiers in accordance with the data acceleration policy and the access history information; and identifying and marking for acceleration portions of the objects comprises identifying and marking for acceleration portions of the objects in accordance with the tiers to which the portions of the objects have been assigned.
  • 20. The method of claim 1, wherein automatically identifying and marking for acceleration at least some of the respective object portions identified by the data access commands further comprises determining a threshold tier in the tiered data structure, and marking for acceleration object portions assigned to the threshold tier and all tiers higher in the tiered data structure than the threshold tier.
  • 21. The method of claim 20, wherein all object portions assigned to the threshold tier and all tiers higher in the tiered data structure than the threshold tier cumulatively comprise a total number of address blocks that is less than or equal to the storage capacity of the persistent cache.
  • 22. The method of claim 1, further including: processing a data write command to write an identified object portion, the data write command received from a respective client of the two or more clients, by: automatically marking for acceleration the identified object portion if the identified object portion is assigned to a tier at or above a threshold tier of the plurality of tiers; in accordance with a determination that the identified object portion is marked for acceleration, writing the identified object portion to the persistent cache and subsequently or concurrently writing the identified object portion to a secondary storage; and in accordance with a determination that the identified object portion is not marked for acceleration, writing the identified object portion to the secondary storage.
  • 23. The computer system of claim 9, wherein the instructions that when executed by the one or more processors cause the computer system to automatically identify and mark for acceleration portions of the objects identified by the data access commands further comprise instructions that when executed by the one or more processors cause the computer system to determine a threshold tier in the tiered data structure and mark for acceleration object portions assigned to the threshold tier and all tiers higher in the tiered data structure than the threshold tier.
  • 24. The method of claim 1, wherein the at least some of the respective object portions marked for acceleration include only a predefined number of the respective object portions, and the predefined number is determined based on a storage capacity of the persistent cache.
  • 25. The method of claim 1, wherein marking a respective object portion for acceleration includes updating information accessible via the tiered data structure to indicate that the respective object portion is marked for acceleration.
RELATED APPLICATIONS

This application claims priority to U.S. Provisional Patent Application No. 61/684,646, filed Aug. 17, 2012, which is hereby incorporated by reference in its entirety.
