Compaction of information in tiered data structure

Information

  • Patent Grant
  • Patent Number
    9,626,400
  • Date Filed
    Monday, July 21, 2014
  • Date Issued
    Tuesday, April 18, 2017
Abstract
A computer system detects a request to access a first data object stored in a tiered data structure that includes internal nodes and leaf nodes, where data objects in the leaf nodes include unique key information and corresponding values, and the first data object is uniquely identified by a first key. In response to detecting the request to access the first data object, the computer system retrieves a leaf node that includes the first data object and identifies the first data object in the leaf node by combining unique key information of the first data object with a key prefix that is stored separately in the leaf node to generate a combined key, and determining that the combined key matches the first key that uniquely identifies the first data object. After identifying the first data object, the computer system provides access to the first data object.
Description
TECHNICAL FIELD

The disclosed embodiments relate generally to memory systems, and in particular, to improving the performance and efficiency of tiered data structures.


BACKGROUND

The speed of many computer operations is frequently constrained by the speed and efficiency with which data can be stored in and retrieved from data structures associated with a device. Many conventional data structures take a long time to store and retrieve data. However, tiered data structures can be used to dramatically improve the speed and efficiency of data storage. Some tiered data structures enable data searches, data insertions, data deletions, and sequential data access to be performed in logarithmic time. However, additional improvements to tiered data structures can further increase the speed and efficiency with which data can be stored and retrieved, thereby improving the performance of computers relying on such tiered data structures.


SUMMARY

Various implementations of systems, methods and devices within the scope of the appended claims each have several aspects, no single one of which is solely responsible for the attributes described herein. Without limiting the scope of the appended claims, after considering this disclosure, and particularly after considering the section entitled “Detailed Description,” one will understand how the aspects of various implementations are used to improve the performance and efficiency of tiered data structures.





BRIEF DESCRIPTION OF THE DRAWINGS

So that the present disclosure can be understood in greater detail, a more particular description may be had by reference to the features of various implementations, some of which are illustrated in the appended drawings. The appended drawings, however, merely illustrate the more pertinent features of the present disclosure and are therefore not to be considered limiting, for the description may admit to other effective features.



FIG. 1 is a block diagram illustrating an implementation of a data storage system, in accordance with some embodiments.



FIG. 2 is a block diagram illustrating an implementation of a computer system, in accordance with some embodiments.



FIGS. 3A-3F illustrate an example of a tiered data structure and example operations performed with the example tiered data structure, in accordance with some embodiments.



FIGS. 4A-4E illustrate a method of efficient cache utilization in a tiered data structure, in accordance with some embodiments.



FIGS. 5A-5C illustrate a method of performing conditional updates for reducing frequency of data modification operations (e.g., in a tiered data structure), in accordance with some embodiments.



FIGS. 6A-6D illustrate a method of compaction of information in a tiered data structure, in accordance with some embodiments.





In accordance with common practice the various features illustrated in the drawings may not be drawn to scale. Accordingly, the dimensions of the various features may be arbitrarily expanded or reduced for clarity. In addition, some of the drawings may not depict all of the components of a given system, method or device. Finally, like reference numerals may be used to denote like features throughout the specification and figures.


DETAILED DESCRIPTION

The various implementations described herein include systems, methods and/or devices used to improve the performance and efficiency of tiered data structures. One or more of the various implementations described herein include systems, methods and/or devices for efficient cache utilization in a tiered data structure. One or more of the various implementations described herein include systems, methods and/or devices for performing conditional updates for reducing frequency of data modification operations (e.g., in a tiered data structure). One or more of the various implementations described herein include systems, methods and/or devices for compaction of information in a tiered data structure.


Numerous details are described herein in order to provide a thorough understanding of the example implementations illustrated in the accompanying drawings. However, some embodiments may be practiced without many of the specific details, and the scope of the claims is only limited by those features and aspects specifically recited in the claims. Furthermore, well-known methods, components, and circuits have not been described in exhaustive detail so as not to unnecessarily obscure more pertinent aspects of the implementations described herein.


As described in more detail below, a computer system detects a request to access a first data object stored in a tiered data structure that includes internal nodes and leaf nodes, where data objects in the leaf nodes include unique key information and corresponding values, and the first data object is uniquely identified by a first key. In response to detecting the request to access the first data object, the computer system retrieves a leaf node that includes the first data object and identifies the first data object in the leaf node by combining unique key information of the first data object with a key prefix that is stored separately in the leaf node to generate a combined key, and determining that the combined key matches the first key that uniquely identifies the first data object. After identifying the first data object, the computer system provides access to the first data object.


In some embodiments, the key prefix for the first data object is stored as part of a second data object that is stored before the first data object in a predefined order in the leaf node. In some embodiments, the key prefix for the first data object is a predefined portion of a key of a distinct second data object in the leaf node. In some embodiments, the data objects in the leaf node are sorted by key in a predefined key order.


In some embodiments, identifying the first data object includes searching through the leaf node for the first data object by comparing the first key with a plurality of candidate keys for candidate data objects in the leaf node, where a respective candidate key for a respective candidate data object is generated by combining unique key information for the respective candidate data object with a corresponding key prefix for the respective candidate data object.
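
By way of illustration, the following Python sketch shows such a candidate-key search; the object layout and field names (prefix_offset, prefix_length, suffix) are assumptions made for the example, not an encoding specified by this disclosure.

    from collections import namedtuple

    # prefix_offset/prefix_length locate a shared key prefix inside the
    # serialized leaf; suffix is the object's unique key information.
    LeafObject = namedtuple("LeafObject", "prefix_offset prefix_length suffix value")

    def candidate_key(leaf_bytes, obj):
        # Combine the stored key prefix with the object's unique key suffix.
        prefix = leaf_bytes[obj.prefix_offset : obj.prefix_offset + obj.prefix_length]
        return prefix + obj.suffix

    def find_object(leaf_bytes, objects, first_key):
        # Compare the requested key against each reconstructed candidate key.
        for obj in objects:  # objects are sorted by key in a predefined order
            if candidate_key(leaf_bytes, obj) == first_key:
                return obj
        return None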


In some embodiments, each respective data object of a plurality of the data objects in the leaf node, including the first data object, includes metadata that identifies a location of a key prefix for the key corresponding to the respective data object. In some embodiments, first metadata for the first data object has a first length, and second metadata for a second data object in the plurality of data objects has a second length that is different from the first length. In some embodiments, the leaf node includes a fixed length header for each of the plurality of data objects, and, for each of the plurality of data objects, the fixed length header includes information identifying a format of metadata included in the data object. Furthermore, in some of these embodiments, different data objects in the plurality of data objects have different formats of metadata.


In some embodiments, the leaf node, as stored, is compressed; and after retrieving the leaf node and prior to identifying the first data object in the leaf node, the computer system decompresses the leaf node.
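
A minimal sketch of this store/retrieve path, assuming a generic byte-oriented compressor (zlib here; this disclosure does not name a particular compression algorithm):

    import zlib

    def store_leaf(leaf_bytes):
        # The leaf node is compressed as stored.
        return zlib.compress(leaf_bytes)

    def retrieve_leaf(stored_bytes):
        # The leaf node is decompressed after retrieval, before the first
        # data object is identified within it.
        return zlib.decompress(stored_bytes)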


In some embodiments, or in some circumstances, the computer system detects a request to insert a new data object in the tiered data structure, and in response to detecting the request to insert the new data object in the tiered data structure, the computer system identifies a respective leaf node, of the plurality of leaf nodes in the tiered data structure, into which the new data object is to be inserted, and also identifies a position in the respective leaf node that is after a prior data object in the respective leaf node in a predefined order. Furthermore, while responding to the request to insert the new data object in the tiered data structure, the computer system determines a prefix for the key of the new data object based on a comparison of the key of the new data object with the key of the prior data object, and inserts the new data object into the respective leaf node along with an indication of a location in the leaf node of the prefix for the key of the new data object.
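
The prefix determination can be sketched as a longest-common-prefix computation against the prior data object's key; the keys below are hypothetical examples, not taken from this disclosure.

    def shared_prefix(prior_key, new_key):
        # Longest common prefix of the prior object's key and the new key;
        # only this much of the new key can be borrowed from the prior object.
        n = 0
        for a, b in zip(prior_key, new_key):
            if a != b:
                break
            n += 1
        return new_key[:n]

    # The new object stores only the suffix beyond the shared prefix, plus
    # an indication of where the prefix is located in the leaf node.
    prefix = shared_prefix(b"user:1059", b"user:1060")  # b"user:10"
    suffix = b"user:1060"[len(prefix):]                 # b"60"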


In some embodiments, or in some circumstances, the computer system detects a request to delete a respective data object in the leaf node that is before a subsequent data object in the leaf node, the respective data object having a key, and in response to detecting the request to delete the respective data object, and in accordance with a determination that the subsequent data object relies on a portion of the key of the respective data object as a key prefix for the subsequent data object, the computer system updates the subsequent data object so that metadata of the subsequent data object does not rely on the portion of the key of the respective data object as the key prefix for the subsequent data object.
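
One possible shape of that fix-up, sketched with an illustrative in-memory layout (a list of dicts with a prefix_source index) rather than the serialized leaf format:

    def delete_object(objects, i):
        # objects: list of dicts with illustrative fields 'full_key',
        # 'unique_key_info', and 'prefix_source' (index of the object whose
        # key supplies this object's prefix, or None).
        for obj in objects[i + 1:]:
            if obj["prefix_source"] == i:
                # The subsequent object can no longer borrow the deleted key:
                # store its full key and null out the prefix reference
                # (alternatively, re-point it at another surviving key).
                obj["unique_key_info"] = obj["full_key"]
                obj["prefix_source"] = None
        del objects[i]
        # After repacking, offsets (and any index-based prefix references)
        # for the remaining objects would also be updated.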


In some embodiments, or in some circumstances, the computer system detects a request to update the first data object in the leaf node, and in response to detecting the request to update the first data object, the computer system updates the value of the first data object, which changes a location of the key prefix for the first data object in the leaf node, and updates the unique key information corresponding to the first data object to reflect the change in the location of the key prefix for the first data object.


In some embodiments, the one or more memory devices in which the tiered data structure is stored include one or more three-dimensional (3D) memory devices and circuitry associated with operation of memory elements in the one or more 3D memory devices. Furthermore, in some embodiments, the circuitry and one or more memory elements in a respective 3D memory device, of the one or more 3D memory devices, are on the same substrate (e.g., a silicon substrate).



FIG. 1 is a block diagram illustrating an implementation of a data storage system 101, in accordance with some embodiments. While some example features are illustrated, various other features have not been illustrated for the sake of brevity and so as not to obscure more pertinent aspects of the example implementations disclosed herein. To that end, as a non-limiting example, data storage system 101 includes computer system 102, tiered data structure 104, cache 106, and one or more internal requestors 108 (e.g., processes or applications that are internal to data storage system 101). While internal requestor 108 is shown as separate from computer system 102 in FIG. 1, in some circumstances internal requestor 108 is a process or application that is co-resident with data access processes on computer system 102. In some embodiments, cache 106 is divided into a data object cache portion 106-1 for storing data objects retrieved from tiered data structure 104 and a node cache portion 106-2 for storing nodes retrieved from tiered data structure 104. In some embodiments, there is a separate data object cache 106-1 that is distinct from node cache 106-2. While cache 106 is shown as separate from computer system 102 in FIG. 1, in some circumstances cache 106 is stored in memory of computer system 102.


In some embodiments, tiered data structure 104 is stored in non-volatile memory such as NAND-type flash memory or NOR-type flash memory, magnetic hard disk drives, or other persistent storage medium that maintains its state when power is removed. In some embodiments, cache 106 is stored in RAM or other random access memory that is not persistent and does not maintain its state when power is removed. In some embodiments, tiered data structure 104 is divided across a plurality of storage devices. Computer system 102 responds to requests from internal requestors 108 (e.g., other computer systems or components of data storage system 101 that need access to data stored in tiered data structure 104) and/or external requestors 110 by storing, retrieving, and modifying data in tiered data structure 104 and cache 106, as described in greater detail below with reference to FIGS. 4A-4E, 5A-5C, and 6A-6D.



FIG. 2 is a block diagram illustrating an implementation of a computer system 102, in accordance with some embodiments. Computer system 102 typically includes one or more processors (also sometimes called CPUs, processing units, microprocessors, or microcontrollers) 202 for executing modules, programs and/or instructions stored in memory 206 and thereby performing processing operations, memory 206, and one or more communication buses 208 for interconnecting these components. Communication buses 208 optionally include circuitry (sometimes called a chipset) that interconnects and controls communications between system components. In some embodiments, computer system 102 is coupled to tiered data structure 104 and cache 106 (optionally including data object cache portion 106-1 and node cache portion 106-2) by communication buses 208 and storage interface(s) 210 (e.g., an input/output (I/O) interface such as a PCI or PCIe bus). In some embodiments, computer system 102 is coupled to internal requestor(s) 108 and/or external requestors 110 by communication buses 208 and requestor interface(s) 212. Memory 206 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM or other random access solid state memory devices, and may include non-volatile memory, such as one or more magnetic disk storage devices, optical disk storage devices, flash memory devices, or other non-volatile solid state storage devices. Memory 206 optionally includes one or more storage devices remotely located from processor(s) 202. Memory 206, or alternately the non-volatile memory device(s) within memory 206, comprises a non-transitory computer readable storage medium. In some embodiments, memory 206, or the computer readable storage medium of memory 206, stores the following programs, modules, and data structures, or a subset thereof:

    • operating logic 220 includes procedures for handling various basic system services and for performing hardware dependent tasks;
    • communications module 222 that is used for communicating with other computer systems or computer components (e.g., via storage interface(s) 210 and requestor interface(s) 212);
    • request module 224 for detecting and processing requests received from internal requestors 108 (FIG. 1) and external requestors 110 (FIG. 1);
    • cache module 226 for storing and retrieving information (e.g., data objects and nodes) from cache 106, optionally including:
      • cache storage module 228 for storing information (e.g., data objects and nodes) in cache 106;
      • cache search module 230 for performing searches based on requested information (e.g., a search for a requested data object or retrieving a node for use in searching for a requested data object) in cache 106; and
      • cache eviction policies 232 for determining which information (e.g., data objects and/or nodes) to evict from cache 106;
    • tiered data structure module 234 for storing and retrieving information (e.g., data objects and nodes) within tiered data structure 104, optionally including:
      • tiered data structure storage module 236 for storing information (e.g., new data objects or updated data objects) in leaf nodes of tiered data structure 104 and/or deleting information from tiered data structure 104;
      • tiered data structure search module 238 for searching through tiered data structure 104 for requested data (e.g., one or more data objects requested by a requestor);
      • metadata generator 240 for generating metadata for data objects that is stored in leaf nodes of tiered data structure 104 with the data objects and enables the data objects to be located with tiered data structure search module 238 in response to requests from requestors; and
      • conditional update module 242 for locking portions of tiered data structure 104 while a conditional update operation is being performed so as to improve the efficiency of the conditional update operation;
    • response generator 244 for generating responses to requests from internal and external requestors based on data retrieved in response to the requests; and
    • optionally, one or more internal requestors 108 for requesting data objects from tiered data structure 104 and/or cache 106.


Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures or modules, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, memory 206 may store a subset of the modules and data structures identified above. Furthermore, memory 206 may store additional modules and data structures not described above. In some embodiments, the programs, modules, and data structures stored in memory 206, or the computer readable storage medium of memory 206, provide instructions for implementing respective operations in the methods described below with reference to FIGS. 4A-4E, 5A-5C, and/or 6A-6D.


Although FIG. 2 shows a computer system 102, FIG. 2 is intended more as a functional description of the various features which may be present in a computer system than as a structural schematic of the embodiments described herein. In practice, and as recognized by those of ordinary skill in the art, items shown separately could be combined and some items could be separated.



FIG. 3A illustrates an example of a tiered data structure, in accordance with some embodiments. Tiered data structure 104 includes a plurality of nodes. The plurality of nodes are organized in a tiered structure in which each respective node is connected to one or more other nodes in levels (tiers) above and/or below the respective node. A parent node for a respective node in tiered data structure 104 is a node that is a level (tier) above the respective node in tiered data structure 104 and refers to the respective node. A child node for a respective node in tiered data structure 104 is a node that is a level (tier) below the respective node in tiered data structure 104 and is referred to by the respective node. Two nodes are at the same level if they have the same number of nodes to traverse to reach root node 302. Root node 302 is a node that has no parent node; typically, there is only one root node for tiered data structure 104. Internal nodes 304 are nodes that have both a parent node and one or more child nodes and are thus internal to the tiered data structure. Leaf nodes 306 are nodes that do not have child nodes and are thus “external” nodes. Root node 302 and internal nodes 304 include references that indicate which child nodes are associated with a particular range of data. For example, root node 302 in FIG. 3A indicates that internal node 304-1 is associated with data with keys between 1 and 136. Internal node 304-1 indicates that: internal node 304-2 is associated with data objects having keys between 1 and 24; internal node 304-3 is associated with data objects having keys between 25 and 66; and internal node 304-4 is associated with data objects having keys between 67 and 136. Similarly, internal node 304-3 indicates that: leaf node 306-2 includes data with keys between 25 and 30; leaf node 306-3 includes data with keys between 31 and 58; and leaf node 306-4 includes data with keys between 59 and 66, and so on.
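
The range-directed descent of FIG. 3A can be sketched as follows; the node representation (a list of (low, high, child) ranges) is an illustrative assumption.

    class Node:
        def __init__(self, children=None, objects=None):
            self.children = children or []  # list of (low, high, child) tuples
            self.objects = objects or {}    # key -> value; populated in leaves

        def is_leaf(self):
            return not self.children

    def find_leaf(root, key):
        # Follow the child whose key range covers the requested key.
        node = root
        while not node.is_leaf():
            for low, high, child in node.children:
                if low <= key <= high:
                    node = child
                    break
            else:
                return None  # key falls outside every child's range
        return node

    # e.g., find_leaf(root, 58) would follow the ranges 1-136, then 25-66,
    # then 31-58 to reach leaf node 306-3.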


Navigating the tiered data structure typically, but optionally, relies on the assumption that keys are always sorted in a predefined order (e.g., monotonically ascending), so that a node that is associated with data having keys between a first value and a second value is associated with all data in the tiered data structure that has keys between the first value and the second value. In some embodiments, each leaf node has a maximum size, and when the leaf node exceeds the maximum size, the leaf node is split into two leaf nodes. In some embodiments, each leaf node has a minimum size, and when a leaf node is below the minimum size, the leaf node is combined with one or more other leaf nodes. In some embodiments, each non-leaf node (e.g., root node or internal node) has a maximum number of child nodes, and when splitting a leaf node results in a non-leaf node having more than the maximum number of child nodes, the non-leaf node is split to accommodate the extra child nodes. In some embodiments, each non-leaf node (e.g., root node or internal node) has a minimum number of child nodes, and when combining two or more leaf nodes results in a non-leaf node having fewer than the minimum number of child nodes, the non-leaf node is combined with one or more other non-leaf nodes to accommodate the reduced number of child nodes. The tiered data structure may additionally conform to some or all of the rules associated with B−Trees, B+Trees, B*Trees, or other tiered data structures.
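
The leaf-split rule, for example, can be sketched as follows (the maximum size and the key/value layout are illustrative stand-ins, not values from this disclosure):

    MAX_LEAF_OBJECTS = 4  # illustrative stand-in for the maximum leaf size

    def maybe_split(leaf):
        # Split an oversized leaf into two; the parent then gains a child and
        # may itself be split if it exceeds its maximum number of children.
        if len(leaf) <= MAX_LEAF_OBJECTS:
            return [leaf]
        mid = len(leaf) // 2
        return [leaf[:mid], leaf[mid:]]  # keys remain in sorted order

    left, right = maybe_split([(59, 'v'), (60, 'v'), (61, 'v'), (63, 'v'), (66, 'v')])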



FIG. 3B illustrates an example of efficient cache utilization in a tiered data structure, in accordance with some embodiments. In FIG. 3B, populated cache 310-a is an example of cache 106 from FIGS. 1 and 2 that is populated with one or more data objects and one or more nodes that were retrieved to respond to prior requests for data objects by one or more internal or external requestors. For example, one of the prior requests was a request for data object 58, so computer system 102 traversed through tiered data structure 104 in FIG. 3A by traversing, in sequence, root node 302, internal node 304-1, and internal node 304-3 to identify and retrieve leaf node 306-3, which includes data object 58. After retrieving data object 58, data object 58 is cached in data object cache portion 106-1 and the traversed nodes are cached in node cache portion 106-2. In FIG. 3B, the data objects in populated cache 310 are shown in order of “staleness,” where more stale data objects are near the bottom of data object cache portion 106-1 and less stale (e.g., fresher) data objects are near the top of data object cache portion 106-1. As data objects are refreshed, they are reordered in the cache to represent their staleness, even though the data objects are, in many circumstances, not actually moved within the cache. Similarly, in FIG. 3B, the nodes in populated cache 310 are shown in order of “staleness,” where more stale nodes are near the bottom of node cache portion 106-2 and less stale (e.g., fresher) nodes are near the top of node cache portion 106-2. As nodes are refreshed, they are reordered in the cache to represent their staleness, even though nodes are, in many circumstances, not actually moved within the cache.


In FIG. 3B, in response to a request (e.g., “request 1”) for data object 61, computer system 102 determines that data object 61 is not in data object cache portion 106-1 in populated cache 310-a. Subsequently, computer system 102 traverses through tiered data structure 104 in FIG. 3A by traversing, in sequence, root node 302, internal node 304-1, and internal node 304-3 to identify and retrieve leaf node 306-4, which includes data object 61. When traversing tiered data structure 104, computer system 102 is able to use a number of cached nodes to improve response time (e.g., by using root node 302, internal node 304-1, and internal node 304-3 to determine that leaf node 306-4 has to be retrieved from tiered data structure 104). Computer system 102 caches the traversed nodes in node cache portion 106-2 and caches data object 61 in data object cache portion 106-1, as shown in updated cache 310-b in FIG. 3B. In order to make room for the traversed nodes and retrieved data object, data object 2 and leaf node 306-1 are evicted from cache 106 in accordance with a cache eviction policy, as shown in updated cache 310-b in FIG. 3B.


In FIG. 3B, in response to a request (e.g., “request 2”) for data object 25, computer system 102 determines that data object 25 is in data object cache portion 106-1 in populated cache 310-b. As data object 25 is already in data object cache portion 106-1, computer system 102 does not traverse tiered data structure 104; instead, data object 25 is retrieved from cache 106. In conjunction with being retrieved, data object 25 is refreshed in data object cache portion 106-1 so that it is less stale than data object 61 rather than being more stale than data object 61, as shown in updated cache 310-c in FIG. 3B. In some embodiments, data object 25 is identified in data object cache portion 106-1 using a hash table to locate a portion of data object cache portion 106-1 that includes data object 25. As no new data objects or nodes were added to cache 106, no data objects or nodes are evicted from cache 106.


In FIG. 3B, in response to a request (e.g., “request 3”) for data object 70, computer system 102 determines that data object 70 is not in data object cache portion 106-1 in populated cache 310-c. Subsequently, computer system 102 traverses through tiered data structure 104 in FIG. 3A by traversing, in sequence, root node 302, internal node 304-1, and internal node 304-4 to identify and retrieve leaf node 306-5, which includes data object 70. When traversing tiered data structure 104, computer system 102 is able to use a number of cached nodes to improve response time (e.g., by using root node 302 and internal node 304-1 to determine that internal node 304-4 and leaf node 306-5 have to be retrieved from tiered data structure 104). Computer system 102 caches the traversed nodes in node cache portion 106-2 and caches data object 70 in data object cache portion 106-1, as shown in updated cache 310-d in FIG. 3B. In order to make room for the traversed nodes and retrieved data object, data object 33, internal node 304-3, and leaf node 306-3 are evicted from cache 106 in accordance with a cache eviction policy, as shown in updated cache 310-d in FIG. 3B.
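
The hit/refresh and miss/evict behavior illustrated by requests 1-3 can be sketched with two least-recently-used portions; the capacities and the LRU policy here are illustrative choices rather than requirements of this disclosure.

    from collections import OrderedDict

    class TieredCache:
        # Sketch of cache 106: separate LRU portions for data objects (106-1)
        # and nodes (106-2); the least stale entry sits at the end.
        def __init__(self, object_slots=4, node_slots=4):
            self.objects = OrderedDict()
            self.nodes = OrderedDict()
            self.object_slots = object_slots
            self.node_slots = node_slots

        def get_object(self, key):
            if key in self.objects:
                self.objects.move_to_end(key)  # refresh: now least stale
                return self.objects[key]
            return None  # miss: caller traverses the tiered data structure

        def put_object(self, key, value):
            self.objects[key] = value
            self.objects.move_to_end(key)
            if len(self.objects) > self.object_slots:
                self.objects.popitem(last=False)  # evict the stalest object

        def put_node(self, node_id, node):
            self.nodes[node_id] = node
            self.nodes.move_to_end(node_id)
            if len(self.nodes) > self.node_slots:
                self.nodes.popitem(last=False)  # evict the stalest node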


While the preceding examples have been shown with a small number of data objects and nodes, it should be understood that in a typical cache, a much larger number of data objects and nodes are stored, and similar processes are performed. For example, in a 2 GB (gigabyte) DRAM cache with a 1 GB data object cache portion, a 1 GB node cache portion, an average node size of 8 KB (kilobytes), and an average data object size of 1 KB, the data object cache portion would hold approximately 1 million data objects and the node cache portion would hold approximately 130,000 nodes. In some embodiments, only internal nodes 304 are cached in node cache portion 106-2. In some embodiments, root node 302 and leaf nodes 306 are cached in node cache portion 106-2, but most leaf nodes are quickly evicted from node cache portion 106-2, while internal nodes 304 are frequently used and thus frequently refreshed in cache 106, so that node cache portion 106-2 includes primarily internal nodes 304 during normal operation (e.g., 50% or more of the capacity of node cache portion 106-2 is occupied by internal nodes). Using a data object cache in addition to a node cache, instead of solely using a node cache, improves the performance of the cache by increasing the likelihood that a requested data object will be available from the cache. For example, using a 1 GB data object cache in addition to a 1 GB node cache approximately quadruples the object capacity of the cache as compared with a 2 GB node cache. Additional details regarding efficient cache utilization in a tiered data structure are described below with reference to method 400 and FIGS. 4A-4E.
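
The capacity figures follow directly from the stated sizes (binary units assumed):

    GiB = 2 ** 30
    KiB = 2 ** 10

    object_portion = 1 * GiB  # data object cache portion
    node_portion = 1 * GiB    # node cache portion

    print(object_portion // (1 * KiB))  # 1048576 -> ~1 million 1 KB objects
    print(node_portion // (8 * KiB))    # 131072  -> ~130,000 8 KB nodes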



FIG. 3C illustrates an example of performing conditional updates for reducing the frequency of traversals (e.g., in a tiered data structure), in accordance with some embodiments. In FIG. 3C, computer system 102 (FIGS. 1 and 2) detects (320) a request, received from an internal requestor or an external requestor, to access one or more data objects (e.g., data object 59, which is in leaf node 306-4). In some circumstances, when the request is detected, tiered data structure 104 does not have any nodes locked (e.g., read locked or read/write locked) by computer system 102, although in some circumstances one or more other computer systems using the same tiered data structure optionally lock one or more of the nodes of tiered data structure 104 when they are using those nodes. Assuming that the requested data object(s) are not available in a cache (e.g., as described above with reference to FIG. 3B), computer system 102 traverses (322) tiered data structure 104, as shown in FIG. 3C, to reach the node (e.g., leaf node 306-4) that includes the requested data object(s) (e.g., data object 59).


After identifying the leaf node that includes the requested data object, computer system 102 locks (323) the leaf node that includes the requested data object, as shown in FIG. 3C, where leaf node 306-4 is locked, and performs one or more additional operations (e.g., operations 324-326) while the leaf node is locked (e.g., read or read/write locked). After locking the leaf node, computer system 102 transmits (324) a conditional-update communication to a requestor, detects (325) a conditional-update response, and performs (326) one or more operations based on the conditional-update response. For example, computer system 102 performs a conditional write operation where the requestor decides whether or not to perform the write operation based on the current value of the data object. As another example, computer system 102 performs a read-modify-write operation by returning a current value of the data object to the requestor in the conditional-update communication. Other examples of conditional update operations include “fetch and op” operations and “compare and swap” operations. In circumstances where the condition for the conditional update operation is not met, the operation performed based on the conditional-update response optionally includes deciding not to perform any update on the requested data object.
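
Operations 323-328 can be sketched as follows; the lock, the object layout, and the requestor callback are illustrative assumptions, with the callback standing in for the conditional-update communication and response.

    import threading

    class Leaf:
        def __init__(self, objects):
            self.objects = objects            # key -> value
            self.lock = threading.Lock()

    def conditional_update(leaf, key, decide):
        with leaf.lock:                       # 323: lock the leaf node
            current = leaf.objects[key]       # 324: conditional-update communication
            new_value = decide(current)       # 325: conditional-update response
            if new_value is not None:         # 326: perform (or skip) the update
                leaf.objects[key] = new_value
        # 328: the leaf node is unlocked on exiting the with-block

    # Example: a compare-and-swap, writing only if the value is unchanged.
    leaf = Leaf({59: b"expected"})
    conditional_update(leaf, 59, lambda cur: b"new" if cur == b"expected" else None)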


In some circumstances, the detected request (e.g., detected in operation 320) includes a request to access multiple data objects that are in a single leaf node, in which case operations 324-326 are, optionally, repeated for two or more of the multiple data objects, so as to reduce the number of traversals of tiered data structure 104. After the operation(s) based on the conditional-update response have been performed, computer system 102 unlocks (328) the leaf node (e.g., leaf node 306-4) that was locked in response to the request to access the requested data object. Additional details regarding performing conditional updates for reducing frequency of data modification operations (e.g., in a tiered data structure) are described below with reference to method 500 and FIGS. 5A-5C.



FIGS. 3D-3F illustrate examples of compaction of information in a tiered data structure, in accordance with some embodiments. FIG. 3D shows an example leaf node 306-4 from tiered data structure 104 in FIG. 3A. Leaf node 306-4 includes data for data objects 59, 60, 61, 63 and 66. For each of these data objects (e.g., DO59, DO60, DO61, DO63, DO66), leaf node 306-4 includes a corresponding fixed length header (H59, H60, H61, H63, and H66, respectively) and corresponding metadata (e.g., M59, M60, M61, M63, and M66, respectively). The fixed length headers include a metadata type, in embodiments where there are a plurality of different metadata types for metadata of the data objects, and an offset (e.g., a number of bytes) from a particular portion of the leaf node (e.g., a beginning or an end of the leaf node) to the location of the data object in the leaf node. The fixed length headers each have the same length and can thus be used to perform a binary search through the data objects in the leaf node. In some embodiments, the fixed length headers are packed to the left in the leaf node and the data objects and metadata are packed to the right in the leaf node, so that there is an area in the middle of the leaf node that decreases or increases in size as data objects are added to, or removed from, the leaf node. Packing the headers and data objects in different directions gives both the headers and the data objects fixed points of reference when these elements are identified by offsets (e.g., the headers can be identified based on an offset from a left edge of the leaf node, and the data objects and metadata can be identified based on an offset from a right edge of the leaf node).
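
One plausible encoding of such a header, and of the left/right addressing it implies, is sketched below; the two-field header format is an assumption made for illustration.

    import struct

    # Assumed fixed-length header: (metadata_type, offset_from_right_edge),
    # packed little-endian as two 16-bit unsigned integers.
    HEADER = struct.Struct("<HH")

    def read_header(leaf_bytes, i):
        # Header i sits at a fixed offset from the left edge, so headers can
        # be indexed directly, which is what enables a binary search.
        return HEADER.unpack_from(leaf_bytes, i * HEADER.size)

    def object_start(leaf_bytes, i):
        _, offset = read_header(leaf_bytes, i)
        return len(leaf_bytes) - offset  # objects are addressed from the right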


The data objects (e.g., DO59, DO60, DO61, DO63, DO66) in leaf node 306-4 each include unique key information (e.g., K59, K60, K61, K63, K66, respectively) and a corresponding value (e.g., V59, V60, V61, V63, V66, respectively). In some embodiments, the unique key information for some of the data objects is a full unique key for the data objects, while the unique key information for other data objects is a portion of a unique key for the data objects, and the metadata for these data objects indicates a location of a key prefix that is shared with one or more other data objects that can be used to recreate the unique key for the data object in combination with the unique key information stored with the data object. For example, data object 59 includes a full unique key in unique key information K59, while data object 60 includes a partial key in unique key information K60 and metadata M60 associated with data object 60 is used to identify a location of a key prefix (e.g., a portion of K59 that serves as a key prefix for data object 60 and, in combination with unique key information K60 can be used to determine a unique key for data object 60). Similarly, data object 61 includes a partial key in unique key information K61 and metadata M61 associated with data object 61 is used to identify a location of a key prefix (e.g., a portion of K59 that serves as a key prefix for data object 61 and, in combination with unique key information K61 can be used to determine a unique key for data object 61).


Metadata (e.g., M59, M60, M61, M63, and M66) for a corresponding data object optionally includes one or more of the following: key length information 334 indicating a length of unique key information associated with the corresponding data object; data length information 336 indicating a length of the corresponding data object or the value of the corresponding data object; prefix offset information 338 that indicates a location of a start of a key prefix for the corresponding data object; prefix length information 340 that indicates a length of the key prefix for the corresponding data object; data overflow pointer 342 that indicates a location of data for the corresponding data object that is too large to fit in the leaf node; and global version information 344 that indicates a version of the corresponding data object. In some embodiments, the global version information 344 includes information identifying the order of each change to data objects in tiered data structure 104 (FIGS. 1 and 2) or data objects in data storage system 101 (FIGS. 1 and 2), which can be used to determine whether a change to a first data object occurred before or after a change to a second, different, data object.


In some embodiments, different data objects have different types of metadata with different lengths, sometimes called variable-length metadata. Using variable-length metadata enables shorter metadata to be used in many situations, and using shorter metadata increases the number of data objects that can be stored in a leaf node. As one example, there are four types of metadata: type-0 metadata, type-1 metadata, type-2 metadata, and type-3 metadata. Type-0 metadata is used when the data object has the same key prefix, key length, and data length as the preceding data object, in which case the metadata includes only global version information 344 (e.g., represented as a 64-bit unsigned integer), and other information such as key prefix location, data length, and key length is determined by looking at the metadata corresponding to the preceding data object. Type-1 metadata is used when the data object has a key length and data length that can each fit in a single byte and data that fits in the leaf node, in which case the metadata includes key length information 334 (e.g., represented as an 8-bit unsigned integer), data length information 336 (e.g., represented as an 8-bit unsigned integer), prefix offset information 338 (e.g., represented as a 16-bit unsigned integer), prefix length information 340 (e.g., represented as an 8-bit unsigned integer), and global version information 344 (e.g., represented as a 64-bit unsigned integer). Type-2 metadata is used when the data object has a key length and data length that can each fit in two bytes, in which case the metadata includes key length information 334 (e.g., represented as a 16-bit unsigned integer), data length information 336 (e.g., represented as a 16-bit unsigned integer), prefix offset information 338 (e.g., represented as a 16-bit unsigned integer), prefix length information 340 (e.g., represented as a 16-bit unsigned integer), data overflow pointer 342 (e.g., represented as a 64-bit unsigned integer), and global version information 344 (e.g., represented as a 64-bit unsigned integer). Type-3 metadata is used for data objects that do not fit in the other categories, in which case the metadata includes key length information 334 (e.g., represented as a 32-bit unsigned integer), data length information 336 (e.g., represented as a 32-bit unsigned integer), prefix offset information 338 (e.g., represented as a 16-bit unsigned integer), prefix length information 340 (e.g., represented as a 32-bit unsigned integer), data overflow pointer 342 (e.g., represented as a 64-bit unsigned integer), and global version information 344 (e.g., represented as a 64-bit unsigned integer). Type-3 metadata is the most flexible of these four metadata types, but is also the largest. Enabling the use of the other types of metadata (e.g., type-0, type-1, and type-2) saves space in the leaf node when type-3 metadata is not needed to store all of the relevant metadata for a data object. While the example above describes four types of metadata, the principles described above (e.g., using a shorter format for metadata where the shorter format enables all of the necessary metadata information to be conveyed) apply equally to other types of metadata; in principle, any number of metadata types could be used in an analogous manner.
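
One plausible packing of these four metadata types, with field widths taken from the description above (byte order and exact layout are assumptions):

    import struct

    METADATA_FORMATS = {
        0: struct.Struct("<Q"),       # global version only             (8 bytes)
        1: struct.Struct("<BBHBQ"),   # key len, data len, prefix offset,
                                      # prefix len, global version      (13 bytes)
        2: struct.Struct("<HHHHQQ"),  # wider fields + overflow pointer (24 bytes)
        3: struct.Struct("<IIHIQQ"),  # widest variant                  (30 bytes)
    }

    def pick_metadata_type(key_len, data_len, same_as_prior, overflows):
        # Choose the shortest type that can represent the object's metadata.
        if same_as_prior:
            return 0
        if key_len < 256 and data_len < 256 and not overflows:
            return 1
        if key_len < 65536 and data_len < 65536:
            return 2
        return 3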



FIG. 3E shows an example of deleting a data object from leaf node 306-4. In the upper part of FIG. 3E, before data object 63 has been deleted, leaf node 306-4 is shown with highlighting in black to indicate the information in leaf node 306-4 that will be deleted when the deletion operation is performed. After data object 63 has been deleted, header H63 is deleted from leaf node 306-4, as shown in the lower part of FIG. 3E, and the remaining headers (e.g., H59, H60, H61, and H66) are repacked against the left edge of leaf node 306-4. Additionally, after data object 63 has been deleted, data object DO63 and corresponding metadata M63 are deleted, as shown in the lower part of FIG. 3E, and the remaining data objects (e.g., DO59, DO60, DO61, and DO66) and metadata (e.g., M59, M60, M61, and M66) are repacked against the right edge of leaf node 306-4. Additionally, before data object 63 was deleted, data object 66 relied on a portion of the key of data object 63 as a key prefix for data object 66. Thus, after data object 63 and its corresponding unique key information K63 are deleted, data object 66 can no longer rely on the portion of the key of data object 63 as a key prefix. Thus, in FIG. 3E, unique key information K66 for data object 66 is updated to include a full unique key for data object 66, and metadata M66 is updated to include a null value for the prefix offset information to indicate that there is no key prefix for data object 66 and that the unique key information K66 for data object 66 includes a full unique key. Alternatively, in some circumstances, computer system 102 determines that there is another data object (e.g., data object 59) in leaf node 306-4 that is associated with unique key information that could be used as a new key prefix for data object 66; in that case, unique key information K66 is updated to include a portion of the unique key for data object 66 that, when combined with the new key prefix, can be used to generate the full unique key for data object 66, and metadata M66 is updated to point to unique key information (e.g., K59) for the other data object so that a portion of that unique key information can be used as a key prefix for data object 66. Additionally, in many circumstances, repacking the data objects and headers as described above after deleting data object 63 will change the locations of data objects, metadata, and headers relative to the locations from which offsets identifying these elements are measured; thus, after a data object, header, and metadata have been deleted, computer system 102 updates the offset information in the header and metadata corresponding to one or more of the other data objects (e.g., data objects that remain in leaf node 306-4 after deleting data object 63).



FIG. 3F shows an example of adding a data object to leaf node 306-4. In the upper part of FIG. 3F, before data object 65 has been added, leaf node 306-4 is shown with data object DO65 that is to be added to leaf node 306-4. After data object 65 has been added, new header H65 is added in between header H63 and header H66, as shown in the lower part of FIG. 3F, and the headers (e.g., H59, H60, H61, H63, H65, and H66) are repacked against the left edge of leaf node 306-4. Additionally, after data object 65 has been added, data object DO65 and corresponding metadata M65 are added to leaf node 306-4, as shown in the lower part of FIG. 3F, and the data objects (e.g., DO59, DO60, DO61, DO63, DO65, and DO66) and metadata (e.g., M59, M60, M61, M63, M65, and M66) are repacked against the right edge of leaf node 306-4. Additionally, before data object 65 was added, data object 66 relied on a portion of the key of data object 63 as a key prefix for data object 66, and data object 63 was adjacent to metadata M66 for data object 66. Thus, after data object 65 is added in between data object 63 and data object 66, metadata M66 of data object 66 is updated to indicate a different offset for the key prefix for data object 66, because the relative position between metadata M66 and unique key information K63 has changed. Moreover, in FIG. 3F, newly added data object 65 is also able to use a portion of unique key information K63 as a key prefix, and thus metadata M65 of data object 65 is updated to identify a portion of K63 as a key prefix that can be combined with unique key information K65 to generate a full unique key for data object 65. Additionally, in many circumstances, repacking the data objects and headers as described above after adding data object 65 will change the locations of data objects, metadata, and headers relative to the locations from which offsets identifying these elements are measured; thus, after a new data object, new header, and new metadata have been inserted, computer system 102 updates the offset information in the header and metadata corresponding to one or more of the other data objects (e.g., data objects that were in leaf node 306-4 prior to adding data object 65).


In some situations, one or more data objects are updated without adding or deleting a data object from leaf node 306-4. However, even though a data object has not been added or deleted, updating a data object will, in some circumstances, change the size of the data object (e.g., by changing the type of metadata used by the data object to a smaller or larger metadata type, or by changing the length of the data to a smaller or larger length). The change in the data object or associated metadata will, in many circumstances, change the locations of data objects, metadata, and headers relative to the locations from which offsets identifying these elements are measured; thus, after a data object or metadata has been updated, computer system 102 updates the offset information in the header and metadata corresponding to one or more of the other data objects. Additional details regarding compaction of information in a tiered data structure are described below with reference to method 600 and FIGS. 6A-6D.


Attention is now directed to FIGS. 4A-4E, which illustrate a method 400 for efficient cache utilization in a tiered data structure, in accordance with some embodiments. Method 400 is, optionally, governed by instructions that are stored in a non-transitory computer readable storage medium and that are executed by one or more processors of one or more computer systems (e.g., computer system 102, FIG. 2). Each of the operations shown in FIGS. 4A-4E typically corresponds to instructions stored in a computer memory or non-transitory computer readable storage medium (e.g., memory 206 of computer system 102 in FIG. 2). The computer readable storage medium optionally (and typically) includes a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices. The computer readable instructions stored on the computer readable storage medium typically include one or more of: source code, assembly language code, object code, or other instruction format that is interpreted or executed by one or more processors. In various embodiments, some operations in method 400 are combined and/or the order of some operations is changed from the order shown in FIGS. 4A-4E.


A computer system (e.g., computer system 102 in FIGS. 1-2) detects (402) a request, received from a requestor (e.g., an internal requestor 108 or an external requestor 110 in FIG. 1), to access a first data object stored in a tiered data structure (e.g., tiered data structure 104 in FIGS. 1 and 3A), the tiered data structure stored in one or more memory devices, wherein the tiered data structure includes a plurality of internal (non-leaf) nodes (e.g., nodes between a root node and the leaf nodes in the tiered data structure) and a plurality of leaf nodes. For example, in some embodiments the tiered data structure is a B−Tree or B−Tree-like structure (e.g., a B+Tree, a B*Tree, or the like) that includes a root node, two or more internal (parent) nodes, and two or more leaf (external child) nodes. In a B−Tree, the topmost node is sometimes called the root node. In a B−Tree, an internal node (also known as an inner node, inode for short, parent node, or branch node) is any node of the B−Tree, other than the root node, that has child nodes. Similarly, in a B−Tree, a leaf node (also known as an outer node, external node, or terminal node) is any node that does not have child nodes.


In some circumstances, two or more of the leaf nodes each include (404) multiple data objects, each of the data objects including unique key information (e.g., a unique key or information from which a unique key can be identified such as a shortened key and a location/length of a key prefix) and a corresponding value. In some embodiments, the corresponding value is data. In some embodiments, the corresponding value is a pointer identifying a location where the data is stored. In some embodiments, the data objects are contiguous data objects where the unique key information for a respective contiguous data object is adjacent or substantially adjacent to the corresponding value for the respective contiguous data object or other data for the respective contiguous data object that is adjacent to the corresponding value. In some embodiments, the data objects are split data objects where the unique key information for a respective split data object is separated from the corresponding value for the respective split data object by other data for other data objects and the unique key information for the respective split data object is stored with a pointer that identifies a location of the corresponding value for the respective split data object.


In response to detecting the request to access the first data object, the computer system determines (406) whether the first data object is stored in a cache (e.g., data object cache portion 106-1 in FIGS. 1 and 3B) that includes a plurality of data objects from the tiered data structure. The data objects stored in the cache are stored separately from the leaf node to which they correspond in the tiered data structure (e.g., such that a first data object can be retrieved from the cache without retrieving a leaf node that includes data objects that are adjacent to the first data object and without traversing through one or more internal nodes of the tiered data structure). In some embodiments, some or all of the cache is in memory of the computer system. In some embodiments, some or all of the cache is remote from the computer system, and the cache is (operatively) in communication with the computer system via one or more communication systems.


In some embodiments, the cache is stored (410) in high-speed memory (e.g., RAM or other non-persistent memory with a high read/write rate that loses stored information when power is shut off to the memory, or even high-speed persistent memory). In some circumstances, high-speed persistent memory is more expensive than slower persistent memory, and thus the amount of high-speed persistent memory is smaller than the amount of slower persistent memory, so as to reduce device cost. In some embodiments, the tiered data structure is stored in persistent memory that has a slower average read and/or write speed than the high-speed memory (e.g., wherein the persistent memory is flash memory, any suitable three-dimensional non-volatile memory such as vertical NAND, RRAM (also called ReRAM), etc., hard disk drives, or other persistent memory that maintains its state even when power is shut off to the memory). In some embodiments, the cache is populated (412) with data objects retrieved by traversing the tiered data structure in response to prior requests to access data objects from the tiered data structure.


After determining whether the first data object is stored in the cache, in accordance with a determination that the first data object is stored in the cache, the computer system returns (414) the first data object from the cache to the requestor. For example, data object 25 is retrieved from data object cache portion 106-1 in response to request 2 in FIG. 3B, as described in greater detail above. In some circumstances, even when the first data object is stored in the cache, one or more other data objects included in the leaf node for the first data object are not included in the cache (e.g., because those data objects are not frequently used data objects). For example, in FIG. 3B, leaf node 306-2 and data objects 26-30 (which are stored in leaf node 306-2 along with data object 25) are not stored in data object cache portion 106-1, even though data object 25 is stored in data object cache portion 106-1. Forgoing storing some data objects from one or more leaf nodes, instead of storing the whole leaf node, improves the utility of the cache: more of the frequently used data objects can be stored in the cache than if full leaf nodes were stored, because less frequently used data objects that happen to reside in the same leaf node as more frequently used objects do not need to be stored. In many circumstances (e.g., where frequently used data objects are distributed among a large number of leaf nodes), storing frequently used data objects in the cache separately from their corresponding leaf nodes effectively increases the size of the cache, as less unnecessary information is stored in the cache.


In some embodiments, returning the first data object from the cache to the requestor includes locating (416) the first data object in the cache using a hash table to map a unique key of the first data object that is included with the request to a portion of the cache (sometimes referred to as a “bucket”) that includes the first data object (and, optionally, one or more other data objects whose unique keys are mapped to the same portion of the cache by the hash table). In some embodiments, returning the first data object from the cache to the requestor includes locating (418) the first data object in the cache without reference to the tiered data structure (e.g., without traversing the tiered data structure and without retrieving the leaf node that includes the first data object). In some embodiments, the cache has a predefined size; at a respective point in time, the cache is populated with recently accessed data objects for a plurality of leaf nodes (e.g., in response to prior requests to access the data objects), and the predefined size of the cache is smaller (420) than the aggregate size of the plurality of leaf nodes. For example, more data objects are stored in the cache than could be stored if each of the data objects was stored with its corresponding leaf node. Thus, in some embodiments, separately caching data objects enables a larger number of recently used data objects to be stored in the cache than could be stored if full leaf nodes were cached, as described above with reference to FIG. 3B. In some embodiments, the respective point in time is a point in time after (422) one or more data objects have been evicted from the cache (e.g., the cache has reached a maximum capacity and one or more least recently used objects, including the one or more evicted data objects, have been evicted from the cache to make room for more recently used objects).
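
The hash-table lookup of operation 416 can be sketched as follows; the bucket count and bucket representation are illustrative.

    NUM_BUCKETS = 1024  # illustrative bucket count

    def bucket_for(key):
        # Map the unique key to one portion ("bucket") of the cache.
        return hash(key) % NUM_BUCKETS

    def cache_lookup(buckets, key):
        # buckets: a list of dicts, each holding the cached data objects
        # whose keys map to that bucket.
        return buckets[bucket_for(key)].get(key)  # None on a cache miss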


After determining whether the first data object is stored in the cache, in accordance with a determination that the first data object is not stored (424) in the cache (e.g., because a search for the first data object in the cache failed to find it), the computer system traverses (426) the tiered data structure to a leaf node that includes the first data object. In some embodiments, in conjunction with traversing the tiered data structure to the leaf node for the first data object, the computer system caches (428) internal nodes that are traversed between a root node and the leaf node for the first data object. For example, in FIG. 3B, internal node 304-4 is cached after being used to retrieve data object 70 in response to request 3. In some embodiments, the nodes are cached in the same cache as the data objects (e.g., cache 106 in FIGS. 1 and 3B). In some embodiments, the nodes are cached in a node cache (e.g., node cache portion 106-2 in FIGS. 1 and 3B) that is separate from the data object cache (e.g., data object cache portion 106-1 in FIGS. 1 and 3B) used for the data objects. In some embodiments, the leaf node for the first data object is also cached. In some embodiments, the node cache is smaller than the data object cache. In some embodiments, the node cache can store a smaller number of nodes than the data object cache can store data objects. The node cache is, optionally, governed by a least recently used (LRU) cache eviction policy, so that when new nodes are stored in the node cache, the least recently used nodes in the node cache are evicted to make room for the new nodes. In some embodiments, traversing the tiered data structure to the leaf node for the first data object includes retrieving (430) one or more nodes that were previously cached (e.g., stored in a node portion of the cache or in a separate node cache) during previous traversals of the tiered data structure (e.g., in response to prior requests to access data objects). For example, in FIG. 3B, internal nodes 304-1 and 304-3 are used to respond to request 1.


After traversing the tiered data structure, the computer system returns (432) the first data object from the leaf node for the first data object in the tiered data structure to the requestor. In some embodiments, in accordance with a determination that the first data object is not stored in the cache, after returning the first data object from the leaf node for the first data object, the computer system stores (434) the first data object in the cache. In some embodiments, in conjunction with storing the first data object in the cache, in accordance with a determination that cache eviction criteria have been met, the computer system evicts (435) one or more other data objects from the cache (e.g., evicting the least recently used data objects in accordance with a least recently used (LRU) cache eviction policy or evicting the oldest data objects in accordance with a first in first out (FIFO) cache eviction policy). In some embodiments, the computer system also caches (436) the leaf node for the first data object in the cache. For example, in FIG. 3B, leaf node 306-4 that includes data object 61 is cached in node cache portion 106-2 when data object 61 is retrieved and cached in data object cache portion 106-1.
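
One way the store-and-evict step (operations 434-435) might look, sketched with a pluggable policy; whether LRU or FIFO is used, and the capacity, are assumptions the text leaves open.

```python
from collections import OrderedDict

class EvictingObjectCache:
    """Object cache that evicts when full, under an assumed 'lru' or 'fifo' policy."""
    def __init__(self, capacity, policy="lru"):
        self.capacity = capacity
        self.policy = policy
        self.objects = OrderedDict()  # key -> data object, oldest entry first

    def get(self, key):
        value = self.objects.get(key)
        if value is not None and self.policy == "lru":
            self.objects.move_to_end(key)  # LRU tracks use; FIFO ignores it
        return value

    def put(self, key, value):
        self.objects[key] = value
        self.objects.move_to_end(key)
        while len(self.objects) > self.capacity:  # cache eviction criteria met
            self.objects.popitem(last=False)      # evict oldest (FIFO) / LRU entry
```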


In some embodiments, the cache has a data object portion (e.g., data object cache portion 106-1 in FIGS. 1 and 3B) for storing data objects separately from their corresponding leaf nodes and a node portion (e.g., node cache portion 106-2 in FIGS. 1 and 3B) for storing leaf nodes and internal nodes of the tiered data structure. In some embodiments, in conjunction with returning the first data object from the leaf node for the first data object in the tiered data structure to the requestor (e.g., when the first data object is not stored in the cache), the computer system caches (438) the first data object in the data object portion of the cache and caches the leaf node for the first data object in the node portion of the cache. After caching the first data object and the leaf node for the first data object, the computer system accesses (440) a different data object in a different leaf node of the tiered data structure (e.g., in response to detecting a request to access the different data object received from the requestor or another, different, requestor). In conjunction with accessing the different data object, the computer system caches (442) the different data object in the data object portion of the cache while maintaining the first data object in the data object portion of the cache and caches (444) the different leaf node in the node portion of the cache and evicts the leaf node for the first data object from the node portion of the cache. In some embodiments, the leaf node for the first data object is evicted before caching the different leaf node to make room for the different leaf node. For example, in FIG. 3B, in response to request 3, leaf node 306-3 is evicted from node cache portion 106-2, while data object 58 (which is from leaf node 306-3) remains in data object cache portion 106-1.
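
The scenario in this paragraph can be made concrete with a toy two-portion cache; the single-slot node portion and the identifiers below are purely illustrative.

```python
object_cache = {}  # data-object portion (assume it is comparatively large)
node_cache = {}    # node portion (assume a single slot, to force eviction)

def access(key, value, leaf_id, leaf_bytes):
    object_cache[key] = value                   # object cached separately from its leaf
    if len(node_cache) >= 1:                    # node portion full:
        node_cache.pop(next(iter(node_cache)))  # evict the previously cached leaf
    node_cache[leaf_id] = leaf_bytes

access("object-61", "<value 61>", "leaf-306-4", "<leaf bytes>")
access("object-70", "<value 70>", "leaf-306-5", "<leaf bytes>")
assert "object-61" in object_cache      # the data object is maintained...
assert "leaf-306-4" not in node_cache   # ...even though its leaf was evicted
```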


In some circumstances, the computer system detects (446) an insert request to insert a second data object into the tiered data structure. In some embodiments, in response (448) to detecting the insert request, the computer system traverses (450) the tiered data structure to a leaf node for the second data object and inserts the second data object into the leaf node for the second data object (e.g., the leaf node for the first data object or another leaf node that is different from the leaf node for the first data object). In some embodiments, inserting the second data object in the tiered data structure causes a leaf node and, optionally, one or more internal nodes to be split. In some embodiments, in accordance with a determination that the second data object was successfully inserted into the tiered data structure, the computer system stores (452) the second data object in the cache separately from the leaf node for the second data object. In some embodiments, if the second data object is not successfully inserted into the tiered data structure, the computer system forgoes storing the second data object in the cache (e.g., the second data object is not stored in the cache) and an error message is optionally sent to the requestor indicating that the second data object was not inserted.


In some circumstances, the computer system detects (454) an update request to update a third data object in the tiered data structure. In some embodiments, in response (456) to detecting the update request, the computer system traverses (458) the tiered data structure to a leaf node for the third data object and updates the third data object in the leaf node for the third data object (e.g., the leaf node for the first data object or another leaf node that is different from the leaf node for the first data object). In some embodiments, in accordance with a determination that the third data object was successfully updated in the tiered data structure, the computer system stores (460) the updated third data object in the cache separately from the leaf node for the third data object. In some embodiments, if the third data object is not successfully updated in the tiered data structure, the computer system forgoes updating the third data object in the cache (e.g., the third data object is not updated in the cache) and an error message is optionally sent to the requestor indicating that the third data object was not updated. In some embodiments, if a prior version of the third data object is already in the cache, storing the updated third data object in the cache includes updating or replacing the prior version of the third data object in the cache with the updated third data object, whereas if a prior version of the third data object is not stored in the cache, the updated third data object is stored in the cache without needing to delete or overwrite a prior version.


In some circumstances, the computer system detects (462) a delete request to delete a fourth data object in the tiered data structure. In some embodiments, in response (464) to detecting the delete request, the computer system traverses (466) the tiered data structure to a leaf node for the fourth data object and deletes the fourth data object from the leaf node for the fourth data object (e.g., the leaf node for the first data object or another leaf node that is different from the leaf node for the first data object). In some embodiments, deleting the fourth data object from the tiered data structure causes two or more leaf nodes and, optionally, two or more internal nodes to be combined. In some embodiments, in accordance with a determination that the fourth data object was successfully deleted from the tiered data structure and is stored in the cache, the computer system deletes (468) the fourth data object from the cache. In some embodiments, if the fourth data object is not successfully deleted from the tiered data structure, the computer system forgoes deleting the fourth data object from the cache (e.g., the fourth data object is not deleted from the cache) and an error message is optionally sent to the requestor indicating that the fourth data object was not deleted. In situations where the fourth data object is not stored in the cache (e.g., in accordance with a determination that the fourth data object is not in the cache), the fourth data object does not need to be deleted from the object cache.
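
The insert, update, and delete flows above share one cache-coherence rule: the cache is touched only after the tree operation succeeds. A sketch of that rule, where `tree` and its methods are assumed stand-ins for the tiered data structure:

```python
def mirror_into_cache(tree, cache, op, key, value=None):
    """Keep the object cache coherent with the tiered data structure for
    insert, update, and delete; on failure the cache is left untouched."""
    if op == "insert":
        ok = tree.insert(key, value)   # may split leaf/internal nodes
        if ok:
            cache[key] = value         # store separately from its leaf
    elif op == "update":
        ok = tree.update(key, value)
        if ok:
            cache[key] = value         # replaces any prior cached version
    elif op == "delete":
        ok = tree.delete(key)          # may combine leaf/internal nodes
        if ok:
            cache.pop(key, None)       # no-op if the object was never cached
    else:
        raise ValueError(f"unknown operation: {op}")
    return ok                          # False -> optionally report an error
```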


It should be understood that the particular order in which the operations in FIGS. 4A-4E have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 500 and 600) are also applicable in an analogous manner to method 400 described above with respect to FIGS. 4A-4E. For example, the tiered data structures, data objects, nodes, and unique key information, described above with reference to method 400 optionally have one or more of the characteristics of the tiered data structures, data objects, nodes, and unique key information described herein with reference to other methods described herein (e.g., methods 500 and 600). For brevity, these details are not repeated here.


Attention is now directed to FIGS. 5A-5C, which illustrate a method 500 for performing conditional updates for reducing frequency of data modification operations (e.g., in a tiered data structure), in accordance with some embodiments. Method 500 is, optionally, governed by instructions that are stored in a non-transitory computer readable storage medium and that are executed by one or more processors of one or more computer systems (e.g., computer system 102, FIG. 2). Each of the operations shown in FIGS. 5A-5C typically corresponds to instructions stored in a computer memory or non-transitory computer readable storage medium (e.g., memory 206 of computer system 102 in FIG. 2). The computer readable storage medium optionally (and typically) includes a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices. The computer readable instructions stored on the computer readable storage medium typically include one or more of: source code, assembly language code, object code, or other instruction format that is interpreted or executed by one or more processors. In various embodiments, some operations in method 500 are combined and/or the order of some operations is changed from the order shown in FIGS. 5A-5C.


A computer system (e.g., computer system 102 in FIGS. 1-2) detects (502) a request, received from a requestor (e.g., an internal requestor 108 or an external requestor 110 in FIG. 1), to access a first data object stored in a tiered data structure (e.g., tiered data structure 104 in FIGS. 1 and 3A), the tiered data structure stored in one or more memory devices, wherein the tiered data structure includes a plurality of internal (non-leaf) nodes (e.g., nodes between a root node and the leaf nodes in the tiered data structure) and a plurality of leaf nodes. For example, in some embodiments the tiered data structure is a B−Tree or B−Tree-like structure (e.g., a B+tree or a B*tree, or the like) that includes a root node, two or more internal (parent) nodes, and two or more leaf (external child) nodes. In a B−Tree, the topmost node is sometimes called the root node. In a B−Tree, an internal node (also known as an inner node, inode for short, parent node or branch node) is any node of the B−Tree that has child nodes other than the root node. Similarly, in a B−Tree, a leaf node (also known as an outer node, external node, or terminal node) is any node that does not have child nodes.


In some circumstances, two or more of the leaf nodes each include (504) multiple data objects, each of the data objects including unique key information (e.g., a unique key or information from which a unique key can be identified such as a shortened key and a location/length of a key prefix) and a corresponding value. In some embodiments, the corresponding value is data. In some embodiments, the corresponding value is a pointer identifying a location where the data is stored. In some embodiments, the data objects are contiguous data objects where the unique key information for a respective contiguous data object is adjacent or substantially adjacent to the corresponding value for the respective contiguous data object or other data for the respective contiguous data object that is adjacent to the corresponding value. In some embodiments, the data objects are split data objects where the unique key information for a respective split data object is separated from the corresponding value for the respective split data object by other data for other data objects and the unique key information for the respective split data object is stored with a pointer that identifies a location of the corresponding value for the respective split data object. In some embodiments, the request to access a first data object includes (506) a conditional request to modify the first data object (e.g., a request that may or may not result in modifying the first data object depending on one or more conditions such as the current value of the first data object).


In some embodiments, the computer system performs (508) one or more operations in response to detecting the request to access the first data object. The computer system retrieves (510) a leaf node that includes the first data object. In some embodiments, retrieving the leaf node includes traversing (512) the tiered data structure by navigating through one or more internal nodes to the leaf node that includes the first data object, and after traversing through the one or more internal nodes, the computer system read-locks (514) the one or more internal nodes that were traversed to reach the leaf node that includes the first data object. In response to detecting the request to access the first data object, the computer system also locks (516) the leaf node that includes the first data object. In some embodiments, the leaf node that includes the first data object is write-locked (518) while the first conditional-update communication is transmitted and the response is received. For example, in FIG. 3C, leaf node 306-4 is write-locked while the conditional-update communication is transmitted and the response is received.


In order to improve the efficiency of performing the conditional update operation, the computer system performs a plurality of operations while the leaf node that includes the first data object is locked. In particular, while the leaf node that includes the first data object is (520) locked, the computer system transmits (522), to the requestor, a first conditional-update communication that includes an indication of the current value of the first data object. In some embodiments, the conditional-update communication includes (524) an executable callback object.


After transmitting the first conditional-update communication, the computer system detects (526) a first conditional-update response corresponding to the first data object received from the requestor in response to the first conditional-update communication (e.g., based on the indication of the current value of the first data object). In some embodiments, the conditional-update response corresponds (528) to a result generated based on execution of the callback object.


In response to detecting the first conditional-update response corresponding to the first data object, the computer system performs (530) one or more operations based on the first conditional-update response corresponding to the first data object. In some embodiments, the first conditional-update communication provides (532) information that enables the requestor to determine whether or not to update the value of the first data object based on a current value of the first data object and performing the one or more operations based on the first conditional-update response includes determining whether or not the conditional-update response includes a request to update the value of the first data object. In accordance with a determination that the first conditional-update response includes a request to update the value of the first data object, the computer system updates (534) the value of the first data object in accordance with the first conditional-update response. In accordance with a determination that the first conditional-update response does not include a request to update the value of the first data object (e.g., the first conditional-update response includes a request to maintain the value of the first data object or the first conditional-update response includes a request to end the update operation for the first data object without requesting that the value of the first data object be updated), the computer system forgoes (536) updating the value of the first data object (e.g., the first conditional-update response corresponding to the first data object enables performance of an update operation that is not a blind update).
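
The locked round trip of operations 516-536 might be sketched as follows; the tree and leaf locking API, and the callback contract, are assumptions rather than the disclosed interfaces.

```python
def conditional_update(tree, key, callback):
    """Lock the leaf, show the requestor the current value, and apply the
    requestor's decision while the leaf is still locked; a sketch assuming
    `callback(current)` returns (update_requested, new_value)."""
    leaf = tree.traverse_to_leaf(key)  # read-locks traversed internal nodes
    leaf.write_lock()                  # held across the whole round trip
    try:
        current = leaf.get(key)
        # First conditional-update communication: report the current value.
        update_requested, new_value = callback(current)
        if update_requested:
            leaf.set(key, new_value)   # update in accordance with the response
        # otherwise forgo the update (this is not a blind update)
    finally:
        leaf.unlock()                  # other readers/writers may proceed
```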


In some circumstances, the request to access the first data object identifies (538) a plurality of data objects including the first data object. In some embodiments, while the leaf node that includes the first data object is locked, and after performing the one or more operations based on the first conditional-update response corresponding to the first data object, the computer system transmits (540), to the requestor, a second conditional-update communication that includes an indication of the current value of a second data object in the plurality of data objects. The computer system subsequently detects (542) a second conditional-update response corresponding to the second data object received from the requestor in response to the second conditional-update communication (e.g., based on the indication of the current value of the second data object) and, in response to detecting the second conditional-update response corresponding to the second data object, the computer system performs (544) one or more operations based on the second conditional-update response corresponding to the second data object. In some embodiments, this process is repeated for a number of different data objects in a predefined (key) order until an object is reached that is not in the leaf node that includes the first data object (e.g., as shown above in FIG. 3C with reference to operations 324-326).


After performing the one or more operations based on the first conditional-update response corresponding to the first data object, the computer system unlocks (546) the leaf node that includes the first data object (e.g., so that other read and/or write operations can be performed on the leaf node and/or data objects contained therein). For example, in FIG. 3C, leaf node 306-4 is unlocked in tiered data structure 104 after performing the operations based on the conditional-update response. In some embodiments (e.g., when the request to access the first data object identifies a plurality of data objects including the first data object), the leaf node that includes the first data object is unlocked after performing (548) the one or more operations based on the second conditional-update response corresponding to the second data object. For example, the leaf node that includes the first data object is unlocked in response to a determination that the request to access the first data object does not identify any additional data objects in the leaf node that includes the first data object.


It should be understood that the particular order in which the operations in FIGS. 5A-5C have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 400 and 600) are also applicable in an analogous manner to method 500 described above with respect to FIGS. 5A-5C. For example, the tiered data structures, data objects, nodes, and unique key information described above with reference to method 500 optionally have one or more of the characteristics of the tiered data structures, data objects, nodes, and unique key information described herein with reference to other methods described herein (e.g., methods 400 and 600). For brevity, these details are not repeated here.


Attention is now directed to FIGS. 6A-6D, which illustrate a method 600 for compaction of information in a tiered data structure, in accordance with some embodiments. Method 600 is, optionally, governed by instructions that are stored in a non-transitory computer readable storage medium and that are executed by one or more processors of one or more computer systems (e.g., computer system 102, FIG. 2). Each of the operations shown in FIGS. 6A-6D typically corresponds to instructions stored in a computer memory or non-transitory computer readable storage medium (e.g., memory 206 of computer system 102 in FIG. 2). The computer readable storage medium optionally (and typically) includes a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices. The computer readable instructions stored on the computer readable storage medium typically include one or more of: source code, assembly language code, object code, or other instruction format that is interpreted or executed by one or more processors. In various embodiments, some operations in method 600 are combined and/or the order of some operations is changed from the order shown in FIGS. 6A-6D.


A computer system (e.g., computer system 102 in FIGS. 1-2) detects (602) a request, received from a requestor (e.g., an internal requestor 108 or an external requestor 110 in FIG. 1), to access a first data object stored in a tiered data structure (e.g., tiered data structure 104 in FIGS. 1 and 3A), the tiered data structure stored in one or more memory devices. The tiered data structure includes (604) a plurality of internal (non-leaf) nodes (e.g., nodes between a root node and the leaf nodes in the tiered data structure) and a plurality of leaf nodes. For example, in some embodiments the tiered data structure is a B−Tree or B−Tree-like structure (e.g., a B+tree or a B*tree, or the like) that includes a root node, two or more internal (parent) nodes, and two or more leaf (external child) nodes. In a B−Tree, the topmost node is sometimes called the root node. In a B−Tree, an internal node (also known as an inner node, inode for short, parent node or branch node) is any node of the B−Tree that has child nodes other than the root node. Similarly, in a B−Tree, a leaf node (also known as an outer node, external node, or terminal node) is any node that does not have child nodes.


Furthermore, two or more of the leaf nodes each include (606) multiple data objects, each of the data objects including unique key information (e.g., a unique key or information from which a unique key can be identified such as a shortened key and a location/length of a key prefix) and a corresponding value. In some embodiments, the corresponding value is data. In some embodiments, the corresponding value is a pointer identifying a location where the data is stored. In some embodiments, the data objects are contiguous data objects where the unique key information for a respective contiguous data object is adjacent or substantially adjacent to the corresponding value for the respective contiguous data object or other data for the respective contiguous data object that is adjacent to the corresponding value. In some embodiments, the data objects are split data objects where the unique key information for a respective split data object is separated from the corresponding value for the respective split data object by other data for other data objects and the unique key information for the respective split data object is stored with a pointer that identifies a location of the corresponding value for the respective split data object. Additionally, the first data object is (608) uniquely identified by a first key. For example, in FIG. 3D, a portion of the key K59 of data object 59 is used, in combination with the unique key information K60 for data object 60, to generate a full unique key that uniquely identifies data object 60.


In response to detecting the request to access the first data object, the computer system retrieves (610) a leaf node that includes the first data object. In some embodiments, the data objects in the leaf node are sorted (612) by key in a predefined key order (e.g., the keys of the data objects in the leaf node are either monotonically increasing or monotonically decreasing from a beginning to an end of the leaf node). In some embodiments, each respective data object of a plurality of the data objects in the leaf node, including the first data object, includes metadata (614) that identifies a location of a key prefix for the key corresponding to the respective data object. In some embodiments, the metadata specifies a location (e.g., an offset to the start of the key prefix) and a length of the key prefix in the leaf node (e.g., metadata M60 for data object 60 includes prefix offset information 338 and prefix length information 340 in FIG. 3D). In some embodiments, one or more of the data objects in the leaf node have a null prefix and the entire key is included in the metadata for these data objects with null prefixes. In some embodiments, one or more data objects in the leaf node include a full key and thus do not have metadata that identifies a location of a corresponding key prefix. In some embodiments, first metadata for the first data object has (616) a first length (e.g., metadata M61 for data object 61 is type-0 metadata in FIG. 3D) and second metadata for a second data object in the plurality of data objects has a second length (e.g., metadata M60 for data object 60 is type-1 metadata in FIG. 3D) that is different from the first length. In some embodiments, the first metadata has a first metadata format that is different from a second metadata format of the second data object. In some embodiments, the first metadata is part of a contiguous first data object where the first metadata, first unique key information, and first value are stored as a contiguous sequence of data (e.g., for data object 61, M61, K61 and V61 are stored contiguously in leaf node 306-4 as shown in FIG. 3D) and the second metadata is part of a contiguous second data object where the second metadata, second unique key information, and second value are stored as a contiguous sequence of data (e.g., for data object 60, M60, K60 and V60 are stored contiguously in leaf node 306-4 as shown in FIG. 3D).
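
An illustrative shape for this per-object metadata, modeled loosely on FIG. 3D; the field names, widths, and the two "types" are assumptions, not the on-media format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ObjectMetadata:
    """Per-object metadata: where in the leaf the key prefix lives and how
    many of its bytes to reuse. A null prefix means the key is stored whole."""
    prefix_offset: Optional[int]  # offset in the leaf to this object's key prefix
    prefix_length: int            # number of prefix bytes to reuse (0 = null prefix)

# Type-0-style metadata (shorter): null prefix, so the object carries its full key.
full_key_metadata = ObjectMetadata(prefix_offset=None, prefix_length=0)
# Type-1-style metadata (longer): borrows a prefix stored earlier in the leaf.
shared_prefix_metadata = ObjectMetadata(prefix_offset=12, prefix_length=3)
```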


In some embodiments, the leaf node includes (618) a fixed length header for each of the plurality of data objects (e.g., headers H59, H60, H61, H63, and H66 in FIG. 3D). In some embodiments, the fixed length headers enable binary searching within the plurality of data objects. For each of the plurality of data objects, the fixed length header includes information identifying a format of metadata included in the data object. In some embodiments, the fixed length header also includes a pointer identifying a location of the data object in the leaf node. In some circumstances, different data objects in the plurality of data objects have different formats of metadata. In some embodiments, the different formats of metadata have different fields and/or different lengths (e.g., so as to increase an amount of data that can be stored in the leaf nodes by using metadata with a reduced size when possible).


In some embodiments, the leaf node, as stored, is compressed. Thus, in some circumstances, when the stored leaf node is retrieved by the computer system it is still compressed. In such circumstances, after retrieving the leaf node and prior to identifying the first data object in the leaf node, the computer system decompresses (620) the leaf node. In some embodiments (e.g., if the content of the leaf node is modified while accessing the leaf node), the leaf node is recompressed after being modified and the compressed, modified, leaf node is stored.
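
A sketch of this retrieve-decompress / modify-recompress cycle, using zlib purely as a stand-in; the disclosure does not specify a codec, and `read_block`/`write_block` are hypothetical storage callbacks.

```python
import zlib

def load_leaf(read_block, leaf_id):
    """Retrieve a stored leaf and decompress it before any object lookup."""
    return zlib.decompress(read_block(leaf_id))

def store_leaf(write_block, leaf_id, leaf_bytes):
    """Recompress a (possibly modified) leaf before writing it back."""
    write_block(leaf_id, zlib.compress(leaf_bytes))
```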


After retrieving the leaf node that includes the first data object and, optionally, decompressing the leaf node, the computer system identifies (622) the first data object in the leaf node. In the process of identifying the first data object in the leaf node, the computer system combines (624) unique key information (e.g., a “shortened” or “truncated” key) of the first data object with a key prefix that is stored separately in the leaf node to generate a combined key. In some embodiments, the key prefix for the first data object is stored (626) as part of a second data object (e.g., as part of the unique key information of the second data information) that is stored before the first data object in predefined order (e.g., a key order) in the leaf node. In some embodiments, the key prefix includes (628) a predefined portion of a key (or unique key information) of a distinct second data object in the leaf node. For example, to retrieve data object 60 from leaf node 306-4, after leaf node 306-4 is retrieved, metadata M60 for data object 60 is retrieved and used to identify a key prefix that is a portion of key K59 for data object 59 and the key prefix (e.g., a specified portion of K59) is combined with unique key information K60 for data object 60 to generate a full unique key (or combined key) for data object 60, which is then available for comparison with the first key for the requested data object.
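
The combining step of operation 624 reduces to concatenating a prefix read from elsewhere in the leaf with the object's shortened key. A sketch; the flat byte layout and key values are assumptions.

```python
def combined_key(leaf_bytes, prefix_offset, prefix_length, unique_key_info):
    """Rebuild an object's full key by prepending the key prefix, stored
    separately in the leaf, to its shortened (unique key information) part."""
    if prefix_length == 0:  # null prefix: the key is stored whole
        return unique_key_info
    prefix = leaf_bytes[prefix_offset:prefix_offset + prefix_length]
    return prefix + unique_key_info

# The prefix "user:12" is borrowed from a neighboring key already in the leaf:
leaf = b"user:1259<value 59>..."  # illustrative leaf contents
assert combined_key(leaf, 0, 7, b"60") == b"user:1260"
```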


In the process of identifying the first data object in the leaf node, the computer system also determines (630) that the combined key matches the first key that uniquely identifies the first data object. In some embodiments, identifying the first data object includes (632) searching through the leaf node for the first data object by comparing the first key with a plurality of candidate keys for candidate data objects in the leaf node. For example, the computer system uses a binary search pattern in which a middle key in a range of key values is compared to the first key; if the first key is greater than the middle key, a first subrange above the middle key is searched, starting with a key in the middle of the first subrange, and if the first key is less than the middle key, a second subrange below the middle key is searched, starting with a key in the middle of the second subrange. In some embodiments, a respective candidate key for a respective candidate data object is generated by combining unique key information for the respective candidate data object with a corresponding key prefix for the respective candidate data object to generate the respective candidate key. After identifying the first data object, the computer system provides (634) access to the first data object to the requestor.
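
The binary search described above, sketched over the leaf's fixed-length headers; `headers` (sorted by key) and the `candidate_key(header)` reconstruction callback are assumed interfaces.

```python
def find_object(headers, first_key, candidate_key):
    """Binary search across a leaf's fixed-length headers for the object
    whose reconstructed (combined) key equals `first_key`."""
    lo, hi = 0, len(headers) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        key = candidate_key(headers[mid])  # key prefix + unique key information
        if key == first_key:
            return headers[mid]            # header points at the requested object
        if key < first_key:
            lo = mid + 1                   # continue in the upper subrange
        else:
            hi = mid - 1                   # continue in the lower subrange
    return None                            # no matching key in this leaf
```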


In some circumstances, the computer system detects (636) a request to update the first data object in the leaf node. In some embodiments, in response (638) to detecting the request to update the first data object, the computer system updates (640) the value of the first data object, wherein updating the value of the first data object changes a location of the key prefix for the first data object in the leaf node. In some embodiments, updating the value of the first data object causes a change in an offset distance from a predefined point in the leaf node (e.g., an offset from a beginning or ending of the leaf node) to the data object that includes the key prefix for the first data object. In some embodiments, in response (638) to detecting the request to update the first data object, the computer system updates (642) the unique key information corresponding to the first data object to reflect the change in the location of the key prefix for the first data object. While changing the value of the first data object does not change its key, a change in size of the first data object will, in some circumstances, cause one or more data objects to be moved around in the leaf node in accordance with the change in size of the first data object. When data objects are moved around in the leaf node, pointers in one or more headers (e.g., offsets specified by one or more fixed length headers) and/or metadata for data objects (e.g., offsets to one or more of the key prefixes) will, in some circumstances, be updated to account for the movement of the data objects in the leaf node.


In some circumstances, the computer system detects (644) a request to insert a new data object in the tiered data structure. In some embodiments, in response (646) to detecting the request to insert the new data object in the tiered data structure, the computer system identifies (648) a respective leaf node, of the plurality of leaf nodes in the tiered data structure, into which the new data object is to be inserted and identifies (650) a position in the respective leaf node that is after a prior data object in the respective leaf node in a predefined order. In some embodiments, in response (646) to detecting the request to insert the new data object in the tiered data structure, the computer system determines (652) a prefix for the key of the new data object based on a comparison between the key of the new data object and the key of the prior data object and inserts (654) the new data object into the respective leaf node along with an indication of a location in the leaf node of the prefix for the key of the new data object. In some embodiments, the computer system also updates metadata (e.g., prefix information) that identifies a location of a prefix for one or more data objects that are after the new data object in the predefined order (e.g., data objects that point to a key prefix in a data object that is before the new data object in the predefined order). An example of adding a data object to a leaf node is described above in greater detail with reference to FIG. 3F.
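
The prefix determination in operation 652 amounts to a longest-common-prefix comparison against the prior key; a minimal sketch, with the sample keys chosen for illustration only.

```python
def shared_prefix_length(new_key, prior_key):
    """Length of the longest common prefix between the new object's key and
    the key of the object that precedes it in key order; the new object can
    then store only the remaining suffix plus the prefix's location."""
    n = 0
    for a, b in zip(new_key, prior_key):
        if a != b:
            break
        n += 1
    return n

assert shared_prefix_length(b"K60", b"K59") == 1              # shares "K"
assert shared_prefix_length(b"user:1260", b"user:1259") == 7  # shares "user:12"
```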


In some circumstances, the computer system detects (656) a request to delete a respective data object in the leaf node that is before a subsequent data object in the leaf node, the respective data object having a key. In some embodiments, in response to detecting the request to delete the respective data object, and in accordance with a determination that the subsequent data object relies on a portion of the key of the respective data object as a key prefix for the subsequent data object, the computer system updates (658) the subsequent data object so that metadata of the subsequent data object does not rely on the portion of the key of the respective data object as the key prefix for the subsequent data object (e.g., by including the whole key in the subsequent data object or by relying on a portion of a key of a different data object in the leaf node). An example of deleting a data object from a leaf node is described above in greater detail with reference to FIG. 3E.
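
One of the repair strategies this paragraph mentions, sketched as a function: when the object supplying a key prefix is deleted, the dependent object is rewritten to carry its whole key and a null prefix. Layout and names are assumptions.

```python
def materialize_full_key(leaf_bytes, prefix_offset, prefix_length, unique_key_info):
    """Expand a dependent object's key before its prefix donor is deleted:
    read the prefix one last time, concatenate, and return the full key
    with a prefix length of 0 (null prefix) to store in its place."""
    prefix = leaf_bytes[prefix_offset:prefix_offset + prefix_length]
    full_key = prefix + unique_key_info
    return full_key, 0  # full key stored in the object; prefix length becomes 0
```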


It should be understood that the particular order in which the operations in FIGS. 6A-6D have been described is merely exemplary and is not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to reorder the operations described herein. Additionally, it should be noted that details of other processes described herein with respect to other methods described herein (e.g., methods 400 and 500) are also applicable in an analogous manner to method 600 described above with respect to FIGS. 6A-6D. For example, the tiered data structures, data objects, nodes, and unique key information described above with reference to method 600 optionally have one or more of the characteristics of the tiered data structures, data objects, nodes, and unique key information described herein with reference to other methods described herein (e.g., methods 400 and 500). For brevity, these details are not repeated here.


Semiconductor memory devices include volatile memory devices, such as dynamic random access memory (“DRAM”) or static random access memory (“SRAM”) devices, non-volatile memory devices, such as resistive random access memory (“ReRAM”), electrically erasable programmable read only memory (“EEPROM”), flash memory (which can also be considered a subset of EEPROM), ferroelectric random access memory (“FRAM”), and magnetoresistive random access memory (“MRAM”), and other semiconductor elements capable of storing information. Each type of memory device may have different configurations. For example, flash memory devices may be configured in a NAND or a NOR configuration.


The memory devices can be formed from passive and/or active elements, in any combinations. By way of non-limiting example, passive semiconductor memory elements include ReRAM device elements, which in some embodiments include a resistivity switching storage element, such as an anti-fuse, phase change material, etc., and optionally a steering element, such as a diode, etc. Further by way of non-limiting example, active semiconductor memory elements include EEPROM and flash memory device elements, which in some embodiments include elements containing a charge storage region, such as a floating gate, conductive nanoparticles, or a charge storage dielectric material.


Multiple memory elements may be configured so that they are connected in series or so that each element is individually accessible. By way of non-limiting example, flash memory devices in a NAND configuration (NAND memory) typically contain memory elements connected in series. A NAND memory array may be configured so that the array is composed of multiple strings of memory in which a string is composed of multiple memory elements sharing a single bit line and accessed as a group. Alternatively, memory elements may be configured so that each element is individually accessible (e.g., a NOR memory array). NAND and NOR memory configurations are exemplary, and memory elements may be otherwise configured.


The semiconductor memory elements located within and/or over a substrate may be arranged in two or three dimensions, such as a two dimensional memory structure or a three dimensional memory structure.


In a two dimensional memory structure, the semiconductor memory elements are arranged in a single plane or a single memory device level. Typically, in a two dimensional memory structure, memory elements are arranged in a plane (e.g., in an x-z direction plane) which extends substantially parallel to a major surface of a substrate that supports the memory elements. The substrate may be a wafer over or in which the layer of the memory elements is formed or it may be a carrier substrate which is attached to the memory elements after they are formed. As a non-limiting example, the substrate may include a semiconductor such as silicon.


The memory elements may be arranged in the single memory device level in an ordered array, such as in a plurality of rows and/or columns. However, the memory elements may be arrayed in non-regular or non-orthogonal configurations. The memory elements may each have two or more electrodes or contact lines, such as bit lines and word lines.


A three dimensional memory array is arranged so that memory elements occupy multiple planes or multiple memory device levels, thereby forming a structure in three dimensions (i.e., in the x, y and z directions, where the y direction is substantially perpendicular and the x and z directions are substantially parallel to the major surface of the substrate).


As a non-limiting example, a three dimensional memory structure may be vertically arranged as a stack of multiple two dimensional memory device levels. As another non-limiting example, a three dimensional memory array may be arranged as multiple vertical columns (e.g., columns extending substantially perpendicular to the major surface of the substrate, i.e., in the y direction) with each column having multiple memory elements in each column. The columns may be arranged in a two dimensional configuration (e.g., in an x-z plane), resulting in a three dimensional arrangement of memory elements with elements on multiple vertically stacked memory planes. Other configurations of memory elements in three dimensions can also constitute a three dimensional memory array.


By way of non-limiting example, in a three dimensional NAND memory array, the memory elements may be coupled together to form a NAND string within a single horizontal (e.g., x-z) memory device level. Alternatively, the memory elements may be coupled together to form a vertical NAND string that traverses across multiple horizontal memory device levels. Other three dimensional configurations can be envisioned wherein some NAND strings contain memory elements in a single memory level while other strings contain memory elements which span through multiple memory levels. Three dimensional memory arrays may also be designed in a NOR configuration and in a ReRAM configuration.


Typically, in a monolithic three dimensional memory array, one or more memory device levels are formed above a single substrate. Optionally, the monolithic three dimensional memory array may also have one or more memory layers at least partially within the single substrate. As a non-limiting example, the substrate may include a semiconductor such as silicon. In a monolithic three dimensional array, the layers constituting each memory device level of the array are typically formed on the layers of the underlying memory device levels of the array. However, layers of adjacent memory device levels of a monolithic three dimensional memory array may be shared or have intervening layers between memory device levels.


Alternatively, two dimensional arrays may be formed separately and then packaged together to form a non-monolithic memory device having multiple layers of memory. For example, non-monolithic stacked memories can be constructed by forming memory levels on separate substrates and then stacking the memory levels atop each other. The substrates may be thinned or removed from the memory device levels before stacking, but as the memory device levels are initially formed over separate substrates, the resulting memory arrays are not monolithic three dimensional memory arrays. Further, multiple two dimensional memory arrays or three dimensional memory arrays (monolithic or non-monolithic) may be formed on separate chips and then packaged together to form a stacked-chip memory device.


Associated circuitry is typically required for operation of the memory elements and for communication with the memory elements. As non-limiting examples, memory devices may have circuitry used for controlling and driving memory elements to accomplish functions such as programming and reading. This associated circuitry may be on the same substrate as the memory elements and/or on a separate substrate. For example, a controller for memory read-write operations may be located on a separate controller chip and/or on the same substrate as the memory elements.


The term “three-dimensional memory device” (or 3D memory device) is herein defined to mean a memory device having multiple memory layers or multiple levels (e.g., sometimes called multiple memory device levels) of memory elements, including any of the following: a memory device having a monolithic or non-monolithic 3D memory array, some non-limiting examples of which are described above; or two or more 2D and/or 3D memory devices, packaged together to form a stacked-chip memory device, some non-limiting examples of which are described above.


One of skill in the art will recognize that this invention is not limited to the two dimensional and three dimensional exemplary structures described, but covers all relevant memory structures within the spirit and scope of the invention as described herein and as understood by one of skill in the art.


It will be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first contact could be termed a second contact, and, similarly, a second contact could be termed a first contact, without changing the meaning of the description, so long as all occurrences of the “first contact” are renamed consistently and all occurrences of the “second contact” are renamed consistently. The first contact and the second contact are both contacts, but they are not the same contact.


The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the claims. As used in the description of the embodiments and the appended claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.


As used herein, the phrase “at least one of A, B and C” is to be construed to require one or more of the listed items, and this phrase reads on a single instance of A alone, a single instance of B alone, or a single instance of C alone, while also encompassing combinations of the listed items such as “one or more of A and one or more of B without any of C,” and the like.


As used herein, the term “if” may be construed to mean “when” or “upon” or “in response to determining” or “in accordance with a determination” or “in response to detecting,” that a stated condition precedent is true, depending on the context. Similarly, the phrase “if it is determined [that a stated condition precedent is true]” or “if [a stated condition precedent is true]” or “when [a stated condition precedent is true]” may be construed to mean “upon determining” or “in response to determining” or “in accordance with a determination” or “upon detecting” or “in response to detecting” that the stated condition precedent is true, depending on the context.


The foregoing description, for purpose of explanation, has been described with reference to specific implementations. However, the illustrative discussions above are not intended to be exhaustive or to limit the claims to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The implementations were chosen and described in order to best explain principles of operation and practical applications, to thereby enable others skilled in the art to best utilize the implementations, with various modifications as are suited to the particular use contemplated.

Claims
  • 1. A method, performed by a computer system having one or more processors and memory, the method comprising: detecting a request, received from a requestor, to access a first data object stored in a tiered data structure, the tiered data structure stored in one or more memory devices, wherein: the tiered data structure includes a plurality of internal nodes and a plurality of leaf nodes; two or more of the leaf nodes each include multiple data objects, each of the data objects including unique key information and a corresponding value; and the first data object is uniquely identified by a first key that is specified by the request; in response to detecting the request to access the first data object: retrieving a leaf node that includes the first data object; and identifying the first data object in the leaf node, including: combining unique key information of the first data object with a key prefix to generate a combined key, wherein the unique key information of the first data object and the key prefix are separated by other information in the leaf node; and determining that the combined key matches the first key that uniquely identifies the first data object; and after identifying the first data object, providing access to the first data object to the requestor.
  • 2. The method of claim 1, wherein the key prefix for the first data object is stored as part of a second data object that is stored before the first data object in predefined order in the leaf node.
  • 3. The method of claim 1, wherein the key prefix for the first data object comprises a predefined portion of a key of a distinct second data object in the leaf node.
  • 4. The method of claim 1, wherein the data objects in the leaf node are sorted by key in a predefined key order.
  • 5. The method of claim 1, wherein: identifying the first data object further includes searching through the leaf node for the first data object by comparing the first key with a plurality of candidate keys for candidate data objects in the leaf node; and a respective candidate key for a respective candidate data object is generated by combining unique key information for the respective candidate data object with a corresponding key prefix for the respective candidate data object to generate the respective candidate key.
  • 6. The method of claim 1, wherein each respective data object of a plurality of the data objects in the leaf node, including the first data object, includes metadata that identifies a location of a key prefix for a key corresponding to the respective data object.
  • 7. The method of claim 6, wherein: first metadata for the first data object has a first length; and second metadata for a second data object in the plurality of data objects has a second length that is different from the first length.
  • 8. The method of claim 6, wherein: the leaf node includes a fixed length header for each of the plurality of data objects; for each of the plurality of data objects, the fixed length header includes information identifying a format of metadata included in the data object; and different data objects in the plurality of data objects have different formats of metadata.
  • 9. The method of claim 1, wherein: the leaf node, as stored, is compressed; and the method further comprises, after retrieving the leaf node and prior to identifying the first data object in the leaf node, decompressing the leaf node.
  • 10. The method of claim 1, further comprising: detecting a request to insert a new data object in the tiered data structure; and in response to detecting the request to insert the new data object in the tiered data structure: identifying a respective leaf node, of the plurality of leaf nodes in the tiered data structure, into which the new data object is to be inserted; identifying a position in the respective leaf node that is after a prior data object in the respective leaf node in a predefined order; determining a prefix for a key of the new data object based on a comparison between the key of the new data object and a key of the prior data object; and inserting the new data object into the respective leaf node along with an indication of a location in the leaf node of the determined prefix for the key of the new data object.
  • 11. The method of claim 1, further comprising: detecting a request to delete a respective data object in the leaf node that is before a subsequent data object in the leaf node, the respective data object having a key; and in response to detecting the request to delete the respective data object, and in accordance with a determination that the subsequent data object relies on a portion of the key of the respective data object as a key prefix for the subsequent data object, updating the subsequent data object so that metadata of the subsequent data object does not rely on the portion of the key of the respective data object as the key prefix for the subsequent data object.
  • 12. The method of claim 1, further comprising: detecting a request to update the first data object in the leaf node; and in response to detecting the request to update the first data object: updating a value of the first data object, wherein updating the value of the first data object changes a location of the key prefix for the first data object in the leaf node; and updating the unique key information of the first data object to indicate the change in the location of the key prefix for the first data object.
  • 13. A computer system, comprising: one or more processors; memory; and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors, the one or more programs including instructions for: detecting a request, received from a requestor, to access a first data object stored in a tiered data structure, the tiered data structure stored in one or more memory devices, wherein: the tiered data structure includes a plurality of internal nodes and a plurality of leaf nodes; two or more of the leaf nodes each include multiple data objects, each of the data objects including unique key information and a corresponding value; and the first data object is uniquely identified by a first key that is specified by the request; in response to detecting the request to access the first data object: retrieving a leaf node that includes the first data object; and identifying the first data object in the leaf node, including: combining unique key information of the first data object with a key prefix to generate a combined key, wherein the unique key information of the first data object and the key prefix are separated by other information in the leaf node; and determining that the combined key matches the first key that uniquely identifies the first data object; and after identifying the first data object, providing access to the first data object to the requestor.
  • 14. The computer system of claim 13, wherein the key prefix for the first data object is stored as part of a second data object that is stored before the first data object in predefined order in the leaf node.
  • 15. The computer system of claim 13, wherein: identifying the first data object further includes searching through the leaf node for the first data object by comparing the first key with a plurality of candidate keys for candidate data objects in the leaf node; and a respective candidate key for a respective candidate data object is generated by combining unique key information for the respective candidate data object with a corresponding key prefix for the respective candidate data object to generate the respective candidate key.
  • 16. The computer system of claim 13, wherein each respective data object of a plurality of the data objects in the leaf node, including the first data object, includes metadata that identifies a location of a key prefix for a key corresponding to the respective data object.
  • 17. The computer system of claim 13, wherein the one or more programs further include instructions for: detecting a request to insert a new data object in the tiered data structure; andin response to detecting the request to insert the new data object in the tiered data structure: identifying a respective leaf node, of the plurality of leaf nodes in the tiered data structure, into which the new data object is to be inserted;identifying a position in the respective leaf node that is after a prior data object in the respective leaf node in a predefined order;determining a prefix for a key of the new data object based on a comparison between the key of the new data object and a key of the prior data object; andinserting the new data object into the respective leaf node along with an indication of a location in the leaf node of the determined prefix for the key of the new data object.
  • 18. The computer system of claim 13, wherein the one or more programs further include instructions for: detecting a request to delete a respective data object in the leaf node that is before a subsequent data object in the leaf node, the respective data object having a key; andupdating the subsequent data object so that metadata of the subsequent data object does not rely on a portion of the key of the respective data object as a key prefix for the subsequent data object, in response to detecting the request to delete the respective data object, and in accordance with a determination that the subsequent data object relies on the portion of the key of the respective data object as the key prefix for the subsequent data object.
  • 19. The computer system of claim 13, wherein the one or more programs further include instructions for: detecting a request to update the first data object in the leaf node; andin response to detecting the request to update the first data object: updating a value of the first data object, wherein updating the value of the first data object changes a location of the key prefix for the first data object in the leaf node; andupdating the unique key information of the first data object to indicate the change in the location of the key prefix for the first data object.
  • 20. A non-transitory computer readable storage medium storing one or more programs, the one or more programs comprising instructions that, when executed by a computer system with one or more processors, cause the computer system to:
    detect a request, received from a requestor, to access a first data object stored in a tiered data structure, the tiered data structure stored in one or more memory devices, wherein:
      the tiered data structure includes a plurality of internal nodes and a plurality of leaf nodes;
      two or more of the leaf nodes each include multiple data objects, each of the data objects including unique key information and a corresponding value; and
      the first data object is uniquely identified by a first key that is specified by the request;
    in response to detecting the request to access the first data object:
      retrieve a leaf node that includes the first data object; and
      identify the first data object in the leaf node, including:
        combining unique key information of the first data object with a key prefix to generate a combined key, wherein the unique key information of the first data object and the key prefix are separated by other information in the leaf node; and
        determining that the combined key matches the first key that uniquely identifies the first data object; and
    after identifying the first data object, provide access to the first data object to the requestor.
  • 21. The non-transitory computer readable storage medium of claim 20, wherein:
    identifying the first data object further includes searching through the leaf node for the first data object by comparing the first key with a plurality of candidate keys for candidate data objects in the leaf node; and
    a respective candidate key for a respective candidate data object is generated by combining unique key information for the respective candidate data object with a corresponding key prefix for the respective candidate data object.
  • 22. The non-transitory computer readable storage medium of claim 20, wherein each respective data object of a plurality of the data objects in the leaf node, including the first data object, includes metadata that identifies a location of a key prefix for a key corresponding to the respective data object.
  • 23. The non-transitory computer readable storage medium of claim 20, wherein the one or more programs, when executed by the one or more processors, further cause the computer system to:
    detect a request to insert a new data object in the tiered data structure; and
    in response to detecting the request to insert the new data object in the tiered data structure:
      identify a respective leaf node, of the plurality of leaf nodes in the tiered data structure, into which the new data object is to be inserted;
      identify a position in the respective leaf node that is after a prior data object in the respective leaf node in a predefined order;
      determine a prefix for a key of the new data object based on a comparison between the key of the new data object and a key of the prior data object; and
      insert the new data object into the respective leaf node along with an indication of a location in the leaf node of the determined prefix for the key of the new data object.
  • 24. The non-transitory computer readable storage medium of claim 20, wherein the one or more programs, when executed by the one or more processors, further cause the computer system to:
    detect a request to delete a respective data object in the leaf node that is before a subsequent data object in the leaf node, the respective data object having a key; and
    in response to detecting the request to delete the respective data object, and in accordance with a determination that the subsequent data object relies on a portion of the key of the respective data object as a key prefix for the subsequent data object, update the subsequent data object so that metadata of the subsequent data object no longer relies on that portion of the key as its key prefix.
  • 25. The non-transitory computer readable storage medium of claim 20, wherein the one or more programs, when executed by the one or more processors, further cause the computer system to:
    detect a request to update the first data object in the leaf node; and
    in response to detecting the request to update the first data object:
      update a value of the first data object, wherein updating the value of the first data object changes a location of the key prefix for the first data object in the leaf node; and
      update the unique key information of the first data object to indicate the change in the location of the key prefix for the first data object.
RELATED APPLICATIONS

This application claims priority to U.S. Provisional Application Ser. No. 61/973,177, filed Mar. 31, 2014, which is incorporated by reference herein in its entirety. This application is also related to U.S. Provisional Patent Application No. 61/973,170, filed Mar. 31, 2014, U.S. Provisional Patent Application No. 61/973,174, filed Mar. 31, 2014, U.S. patent application Ser. No. 14/336,931, filed Jul. 21, 2014, and U.S. patent application Ser. No. 14/336,949, filed Jul. 21, 2014, all of which are hereby incorporated by reference in their entireties.

Related Publications (1)
Number Date Country
20150278271 A1 Oct 2015 US
Provisional Applications (1)
Number Date Country
61973177 Mar 2014 US