This invention relates generally to a storage system and more particularly to distributed garbage collection in a distributed storage system.
Enterprise storage systems currently available are proprietary storage appliances that integrate the storage controller functions and the storage media into the same physical unit. This centralized model makes it harder to independently scale the storage systems' capacity, performance, and cost. Users can get tied to one expensive appliance without the flexibility of adapting it to different application requirements that may change over time. For small and medium scale enterprises, this may require a huge upfront capital cost. For larger enterprise datacenters, new storage appliances are added as the storage capacity and performance requirements increase. These appliances operate in silos and impose significant management overhead.
These storage systems are built either as in-place filesystems (where data is overwritten in place), log-structured filesystems (where data being written is redirected to a new location), or copy-on-write filesystems (where the data is written in place, but a copy of the original data is written to a new location). In all of these approaches, cleaning up data to reclaim space, whether that space was freed by the invalidation of old data by new writes or by user-triggered deletes, poses a challenging problem.
In addition, storage systems build a reference counting mechanism to track the data accessible by the user. Whenever a data block or segment reaches a reference count of zero, it becomes a viable candidate for reclamation. That approach is efficient on a single node, where there is no requirement to coordinate the reference count on a data block. However, this mechanism becomes a challenge in a distributed multi-node environment.
A distributed garbage collection in a distributed storage system is described, where the storage controller functions of the distributed storage system are separated from those of the distributed storage system storage media. In an exemplary embodiment, a storage controller server generates a live object map of live objects stored on the distributed storage system in a plurality of block segments distributed across a plurality of storage controller servers. The storage controller server further scans the plurality of block segments to generate segment summary statistics, where the segment summary statistics indicate the number of live objects stored in the plurality of block segments. In addition, the storage controller server compacts each of the plurality of block segments that has a low utilization based on the segment summary statistics. Furthermore, the live object map is a probabilistic data structure storing a list of valid objects.
Other methods and apparatuses are also described.
The present invention is illustrated by way of example and not limitation in the figures of the accompanying drawings in which like references indicate similar elements.
A distributed garbage collection in a distributed storage system is described, where the storage controller functions of the distributed storage system are separated from those of the distributed storage system storage media. In the following description, numerous specific details are set forth to provide a thorough explanation of embodiments of the present invention. It will be apparent, however, to one skilled in the art, that embodiments of the present invention may be practiced without these specific details. In other instances, well-known components, structures, and techniques have not been shown in detail in order not to obscure the understanding of this description.
Reference in the specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the invention. The appearances of the phrase “in one embodiment” in various places in the specification do not necessarily all refer to the same embodiment.
In the following description and claims, the terms “coupled” and “connected,” along with their derivatives, may be used. It should be understood that these terms are not intended as synonyms for each other. “Coupled” is used to indicate that two or more elements, which may or may not be in direct physical or electrical contact with each other, co-operate or interact with each other. “Connected” is used to indicate the establishment of communication between two or more elements that are coupled with each other.
The processes depicted in the figures that follow are performed by processing logic that comprises hardware (e.g., circuitry, dedicated logic, etc.), software (such as is run on a general-purpose computer system or a dedicated machine), or a combination of both. Although the processes are described below in terms of some sequential operations, it should be appreciated that some of the operations described may be performed in a different order. Moreover, some operations may be performed in parallel rather than sequentially.
The terms “server,” “client,” and “device” are intended to refer generally to data processing systems rather than specifically to a particular form factor for the server, client, and/or device.
A distributed garbage collection in a distributed storage system is described, where the storage controller functions of the distributed storage system are separated from those of the distributed storage system storage media. In one embodiment, the StorFS system writes incoming data to a new location on the persistent storage. This means that when an object or a logical offset within a file gets overwritten, the old object or file content needs to be removed (e.g., cleaned or garbage collected). Because of this, the StorFS system needs to determine a list of valid (e.g., live) objects. Due to the presence of snapshots, clones, and deduplication, multiple files or file systems may reference the same data object. In one embodiment, the StorFS system traverses the metadata tree to compile a list of valid data objects, which form the leaves of the metadata tree. Because a cluster may contain billions of valid objects, this list can be gigantic and would not fit in main memory.
To address this issue, and in one embodiment, the StorFS system uses a space-efficient probabilistic data structure to store the list of valid objects. In one embodiment, the space-efficient probabilistic data structure is a space-efficient approximate membership data structure, such as a bloom filter or a quotient filter. For example and in one embodiment, a bloom filter can be used to store the list of valid objects. The bloom filter permits membership tests with very little memory compared to the number of object entries it stores. The garbage collection process involves generating a live object map, cleaning the segments, and compacting the segments. In one embodiment, the live object map generation includes traversing the metadata tree and populating the bloom filter. In one embodiment, cleaning the segments includes scanning the segments to check the number of live objects contained in the segments and generating segment summary statistics. In one embodiment, compacting the segments includes compacting segments with low utilization based on the segment summary statistics.
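For purposes of illustration only, the following Python sketch shows one way a bloom-filter-based live object map could be realized; the class name LiveObjectMap, the default sizing parameters, and the use of SHA-256-derived bit positions are hypothetical stand-ins rather than a description of the StorFS implementation.

import hashlib


class LiveObjectMap:
    """Space-efficient approximate membership structure for live object keys."""

    def __init__(self, num_bits=1 << 20, num_hashes=4):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, key):
        # Derive k bit positions from a salted hash of the key.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        # False means the object is definitely dead; True means it is probably
        # live (false positives are possible, false negatives are not).
        return all(self.bits[pos // 8] & (1 << (pos % 8)) for pos in self._positions(key))


# Populate the map from the leaves of the metadata tree (hypothetical keys),
# then test segment objects for liveness.
live_map = LiveObjectMap()
for key in ["obj-1", "obj-2", "obj-7"]:
    live_map.add(key)
print(live_map.might_contain("obj-2"), live_map.might_contain("obj-9"))

Because a bloom filter has no false negatives, a negative membership test definitively identifies a dead object, while a positive test indicates only that the object is probably live; the cleaning and compaction passes described below rely on this property.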
In one embodiment, the design of the StorFS system 100 distributes both the data and the metadata, and the system 100 does not require storing a complete global map for locating individual data blocks. The responsibility of managing metadata is offloaded to each individual storage node 102A-C. In one embodiment, a cluster manager (CRM) residing on each SC server 110 maintains some global metadata, which is small compared to the local metadata. In one embodiment, each logical file (or entity) is partitioned into equal-sized “stripe units”. The location of a stripe unit is determined based on a mathematical placement function, Equation (1):

Virtual_Node# = Hash(EntityId, Stripe_Unit#) % Total_Virtual_Nodes, where Stripe_Unit# = (offset / Stripe_Unit_Size) % Stripe_Unit_Per_Stripe  (1)
The EntityId is an identification of a storage entity that is to be operated upon, the Total_Virtual_Nodes is the total number of virtual nodes in the StorFS system 100, the offset is an offset into the storage entity, and the Stripe_Unit_Size is the size of each stripe unit in the StorFS system 100. The value Stripe_Unit_Per_Stripe is described further below. In one embodiment, the storage entity is data that is stored in the StorFS system 100. For example and in one embodiment, the storage entity could be a file, an object, a key-value pair, etc. In this example, the EntityId can be an iNode value, a file descriptor, an object identifier, a key/value identifier, etc. In one embodiment, an input to a storage operation (e.g., a write, read, query, create, delete, etc. operation) is the EntityId and the offset. In this embodiment, the EntityId is a globally unique identification.
In one embodiment, the StorFS system 100 receives the EntityId and offset as input for each requested storage operation from an application 106A-C. In this embodiment, the StorFS system 100 uses the offset to compute a stripe unit number, Stripe_Unit#, based on the stripe unit size, Stripe_Unit_Size, and the number of virtual nodes that the entity can be spread across, Stripe_Unit_Per_Stripe. Using the stripe unit number and the entity identifier (EntityId), the StorFS system 100 computes the virtual node identifier. As described below, the StorFS system 100 uses a hash function to compute the virtual node identifier. With the virtual node identifier, the StorFS system 100 can identify which physical node the storage entity is associated with and can route the request to the corresponding SC server 110A-C.
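For illustration only, the following Python sketch expresses the placement computation described above and in Equation (1); the function name, the use of crc32 in place of a Jenkins- or murmur-style hash, and the example parameter values are hypothetical assumptions.

import zlib


def virtual_node(entity_id: str, offset: int,
                 stripe_unit_size: int, stripe_unit_per_stripe: int,
                 total_virtual_nodes: int) -> int:
    """Map an (EntityId, offset) pair to a virtual node number."""
    stripe_unit = (offset // stripe_unit_size) % stripe_unit_per_stripe
    # Any deterministic hash with good uniformity works here; crc32 stands in
    # for the Jenkins- or murmur-style hash mentioned in the description.
    h = zlib.crc32(f"{entity_id}:{stripe_unit}".encode())
    return h % total_virtual_nodes


# Example: route a write for a file identified by its iNode value.
vnode = virtual_node("inode-4711", offset=12 * 1024 * 1024,
                     stripe_unit_size=4 * 1024 * 1024,
                     stripe_unit_per_stripe=8,
                     total_virtual_nodes=1024)
print(vnode)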
In one embodiment, each vNode is a collection of one or more data or metadata objects. In one embodiment, the StorFS system 100 does not store data and metadata in the same virtual node. This is because data and metadata may have different access patterns and quality of service (QoS) requirements. In one embodiment, a vNode does not span two devices (e.g., HDDs). A single storage disk of a storage node 102A-C may contain multiple vNodes. In one embodiment, the placement function uses a deterministic hashing function that has good uniformity over the total number of virtual nodes. A hashing function as known in the art can be used (e.g., Jenkins hash, murmur hash, etc.). In one embodiment, the “Stripe_Unit_Per_Stripe” attribute determines the total number of virtual nodes that an entity can be spread across. This enables distributing and parallelizing the workload across multiple storage nodes (e.g., multiple SC servers 110A-C). In one embodiment, the StorFS system 100 uses a two-level indexing scheme that maps the logical address (e.g., offset within a file or an object) to a virtual block address (VBA) and from the VBA to a physical block address (PBA). In one embodiment, the VBAs are prefixed by the identifier (ID) of the vNode in which they are stored. This vNode ID is used by the SC client and other StorFS system 100 components to route the I/O to the correct cluster node. The physical location on the disk is determined based on the second index, which is local to a physical node. In one embodiment, a VBA is unique across the StorFS cluster, where no two objects in the cluster will have the same VBA.
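The two-level indexing scheme can be pictured with the following minimal Python sketch, provided for illustration only; the dictionary-based indexes and the example encoding of a VBA as a (vNode ID, block) pair are hypothetical simplifications of the description above.

from typing import Dict, Tuple

# First-level index: (entity, logical offset) -> VBA, where the VBA is prefixed
# with the owning vNode ID so requests can be routed to the right cluster node.
logical_to_vba: Dict[Tuple[str, int], Tuple[int, int]] = {}

# Second-level index, local to a physical node: VBA -> physical block address.
vba_to_pba: Dict[Tuple[int, int], int] = {}


def route_and_resolve(entity_id: str, offset: int) -> Tuple[int, int]:
    """Translate a logical address to a physical block address in two hops."""
    vba = logical_to_vba[(entity_id, offset)]   # first level: logical -> VBA
    vnode_id = vba[0]                           # VBA prefix identifies the owning vNode
    pba = vba_to_pba[vba]                       # second level (node-local): VBA -> PBA
    return vnode_id, pba


# Hypothetical example entries.
logical_to_vba[("inode-4711", 0)] = (17, 42)    # vNode 17, virtual block 42
vba_to_pba[(17, 42)] = 90210                    # physical block on that node's disk
print(route_and_resolve("inode-4711", 0))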
In one embodiment, the cluster manager (CRM) maintains a database of the virtual node (vNode) to physical node (pNode) mapping. In this embodiment, each SC client and server caches the above mapping and computes the location of a particular data block using the function in Equation (1). In this embodiment, the cluster manager need not be consulted for every I/O. Instead, the cluster manager is notified if there is any change in the vNode to pNode mapping, which may happen due to node/disk failure, load balancing, etc. This allows the StorFS system to scale up and parallelize/distribute the workload to many different storage nodes. In addition, this provides a more deterministic routing behavior and quality of service. By distributing I/Os across different storage nodes, the workloads can take advantage of the caches in each of those nodes, thereby providing higher combined performance. Even if the application migrates (e.g., a virtual machine migrates in a virtualized environment), the routing logic can fetch the data from the appropriate storage nodes. Since the placement is done at the stripe unit granularity, access to data within a particular stripe unit goes to the same physical node. Access to two different stripe units may land in different physical nodes. The striping can be configured at different levels (e.g., file, volume, etc.). Depending on the application settings, the size of a stripe unit can range from a few megabytes to a few hundred megabytes. In one embodiment, this can provide a good balance between fragmentation (for sequential file access) and load distribution.
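As an illustration of the cached vNode-to-pNode routing described above, the following Python sketch shows a client-side cache that is consulted on every I/O and updated only when the cluster manager reports a remapping; the class and method names are hypothetical.

class Router:
    """Caches the CRM's vNode-to-pNode mapping so the CRM is not consulted per I/O."""

    def __init__(self, vnode_to_pnode):
        self._map = dict(vnode_to_pnode)    # cached copy of the CRM database

    def route(self, vnode_id):
        # Normal path: pure cache lookup, no round trip to the cluster manager.
        return self._map[vnode_id]

    def on_crm_update(self, vnode_id, new_pnode):
        # Invoked only when the CRM reports a change (node/disk failure,
        # load balancing, etc.).
        self._map[vnode_id] = new_pnode


router = Router({17: "pnode-A", 18: "pnode-B"})
print(router.route(17))                 # "pnode-A"
router.on_crm_update(17, "pnode-C")     # CRM notifies of a remapping
print(router.route(17))                 # "pnode-C"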
In one embodiment, the garbage collection module includes a live object map 312, a segment cleaner 314, and a segment compactor 316. In one embodiment, the live object map 312 is a map of the live objects stored in the StorFS system. In one embodiment, the garbage collector 304 further includes a live object map full module 318 that builds a full live object map as described below.
In one embodiment, the StorFS system writes incoming data to a new location on the persistent storage. This means that when an object or a logical offset within a file gets overwritten, the old object or file content needs to be removed (e.g., cleaned or garbage collected). Because of this, the StorFS system needs to determine a list of valid (e.g., live) objects. Due to the presence of snapshots, clones, and deduplication, multiple files or file systems may reference the same data object. In one embodiment, the StorFS system traverses the metadata tree to compile a list of valid data objects, which form the leaves of the metadata tree. Because a cluster may contain billions of valid objects, this list can be gigantic and would not fit in main memory. To address this issue, the StorFS system uses a space-efficient probabilistic data structure to store the list of valid objects. For example and in one embodiment, a bloom filter can be used to store the list of valid objects. The bloom filter permits membership tests with very little memory compared to the number of object entries it stores. The garbage collection process involves generating a live object map, cleaning the segments, and compacting the segments. In one embodiment, the live object map generation includes traversing the metadata tree and populating the bloom filter. In one embodiment, cleaning the segments includes scanning the segments to check the number of live objects contained in the segments and generating segment summary statistics. In one embodiment, compacting the segments includes compacting segments with low utilization based on the segment summary statistics.
At block 354, process 350 scans the segments to check the number of live objects contained in the segments and generates a segment summary statistic. In one embodiment, once the live object map has been created, process 350 scans the block segments to check the number of live objects they contain and generates a segment summary statistic of the utilization of the segments (e.g., the ratio of the number of live objects to the total number of objects in the segment). In one embodiment, the summary statistics can be built in two ways: (1) by a complete block segment scan or (2) using an incremental statistics update. In one embodiment, in the complete block segment scan, process 350 iterates over all the segments to check for valid objects. This approach can have more overhead because of the large number of I/Os it generates.
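For illustration only, the following Python sketch shows the complete-scan variant of building the segment summary statistics; a plain set stands in for the bloom-filter-based live object map, and the segment layout is a hypothetical simplification.

def segment_utilization(segment_objects, live_map):
    """Ratio of live objects to total objects in a block segment."""
    if not segment_objects:
        return 0.0
    live = sum(1 for key in segment_objects if key in live_map)
    return live / len(segment_objects)


def full_scan(segments, live_map):
    # Iterate over every segment and record its utilization; this is the
    # complete-scan variant, which generates a large number of I/Os.
    return {seg_id: segment_utilization(objs, live_map)
            for seg_id, objs in segments.items()}


# A plain set stands in for the bloom-filter-based live object map.
live_map = {"obj-1", "obj-7"}
segments = {"seg-1": ["obj-1", "obj-2", "obj-9"], "seg-2": ["obj-7"]}
print(full_scan(segments, live_map))    # {'seg-1': 0.333..., 'seg-2': 1.0}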
In one embodiment, in the incremental summary statistics update, process 350 uses the dead object hint log generated by the ‘Write Log Flusher’. The objects in this log are tested for their membership in the live object map. If the test is negative, process 350 confirms that the object is actually dead. In addition, process 350 updates its segment summary statistics by decrementing the utilization.
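A minimal Python sketch of this incremental statistics update is shown below for illustration only; the shape of the dead object hint log entries and of the per-segment statistics records is a hypothetical assumption.

def incremental_update(dead_hint_log, live_map, segment_stats):
    """Apply the write-log flusher's dead-object hints to the summary statistics."""
    for segment_id, key in dead_hint_log:
        # A negative membership test confirms the object is actually dead
        # (the bloom filter has no false negatives).
        if key not in live_map:
            stats = segment_stats[segment_id]
            stats["live"] -= 1
            stats["utilization"] = stats["live"] / stats["total"]


segment_stats = {"seg-1": {"live": 3, "total": 3, "utilization": 1.0}}
dead_hint_log = [("seg-1", "obj-2"), ("seg-1", "obj-9")]
live_map = {"obj-1"}                    # stand-in for the bloom filter
incremental_update(dead_hint_log, live_map, segment_stats)
print(segment_stats["seg-1"]["utilization"])    # 0.333...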
Process 350 compacts the segments with low utilization at block 356. In one embodiment, segment compaction compacts segments whose utilization drops below a certain threshold. In this embodiment, each segment can contain four types of entities: live objects, dead objects, object tombstone entries, and segment tombstone entries. During compaction, these entities are dealt with as follows: tombstone entries are discarded and need not be carried forward; live objects are copy forwarded to new segments and the corresponding index for each object is updated; a tombstone entry is added for each dead object; and a segment tombstone is added for the segment being compacted.
In one embodiment, metadata traversal can cause a lot of random I/Os, which can lead to very high overhead on HDDs. File metadata, for example, is organized in a hierarchical tree format to locate data blocks on the persistent storage. In one embodiment, the StorFS system employs a smart traversal algorithm that efficiently reads the whole metadata tree with very little random I/O to the HDD. If there is sufficient space on the faster flash drive to hold the metadata vNode, the StorFS system copies the entire metadata vNode onto the faster flash drive. The random read I/Os during the traversal are sent to the faster flash drives, which have much higher random read performance compared to HDDs. In addition, the StorFS system caches the top few levels of the metadata tree in DRAM to further expedite the tree traversal.
The live object map can be generated for the complete metadata tree using the algorithm described below.
Process 400 instantiates a bloom filter to manage the data keys at block 406. In one embodiment, this bloom filter is called the D-Bloom. At block 408, process 400 instantiates a bloom filter to manage and clean the metadata vNode, that is, the metadata keys. In one embodiment, this metadata bloom filter is called the M-Bloom. In one embodiment, the size of the M-Bloom is up to an order of magnitude smaller than the D-Bloom bloom filter. In one embodiment, this bloom filter is a space-efficient probabilistic data structure that is used to test whether an object is a member of the live object map. Process 400 creates a cache to store intermediate metadata blocks for file tree traversal at block 410.
At block 412, process 400 obtains the head segment ID of the snapshot being cleaned and the head segment ID of the last cleaned snapshot. The latter is referred to as the tail segment. Process 400 fetches the current list of blocks being deduplicated from the deduplication module 602. In one embodiment, for each entry in the deduplication list with a reference count greater than zero that resides between the Head Segment and the Tail Segment, process 400 adds this entry to the D-Bloom list. In addition, at block 414, process 400 removes those entries from the Deduplication Module whose reference count is equal to zero. For example and in one embodiment, the Deduplication Module is the Deduplication Module 302 described above.
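For illustration only, the following Python sketch shows one reading of how the deduplication list could be folded into the cleaning pass; the tuple layout of the deduplication entries and the segment-range test are hypothetical assumptions, and a plain set stands in for the D-Bloom filter.

def process_dedup_list(dedup_entries, head_segment, tail_segment, d_bloom):
    """Fold the deduplication module's reference counts into the cleaning pass."""
    to_remove = []
    for key, refcount, segment_id in dedup_entries:
        in_range = tail_segment < segment_id <= head_segment
        if refcount > 0 and in_range:
            d_bloom.add(key)            # still referenced: treat as live
        elif refcount == 0:
            to_remove.append(key)       # ask the dedup module to drop the entry
    return to_remove


d_bloom = set()                         # stand-in for the D-Bloom filter
entries = [("obj-1", 2, 120), ("obj-5", 0, 118), ("obj-9", 1, 95)]
drop = process_dedup_list(entries, head_segment=125, tail_segment=100, d_bloom=d_bloom)
print(sorted(d_bloom), drop)            # ['obj-1'] ['obj-5']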
This complete metadata traversal, however, comes at a cost. In one embodiment, instead of walking the whole tree each time, the StorFS system implements an approach to incrementally clean the segments written in a particular time duration (e.g., in a day). Consider the scenario as illustrated in
At block 510, process 500 creates a cache to store intermediate metadata blocks for file tree traversal. Process 500 obtains the head segment ID and the generation number of the snapshot currently being cleaned at block 512. In one embodiment, the head segment ID and the generation number are recorded along with other snapshot attributes when the snapshot is written to the persistent storage. Process 500 obtains the head segment ID and the generation number from the snapshot attributes. In addition, process 500 obtains the head segment ID and the generation number of the last cleaned snapshot. In one embodiment, the head segment ID of the last cleaned snapshot is called the tail segment.
At block 514, process 500 fetches the current list of blocks being deduplicated. In one embodiment, process 500 fetches this list from the deduplication module. In one embodiment, at block 514, process 500 further, for each entry in the deduplication list with a reference count greater than zero that resides between the head segment and the tail segment, adds that entry to the D-Bloom list. In addition, process 500, for each entry with a reference count that is equal to zero, removes that entry. In one embodiment, process 500 removes the entry by sending a request to the deduplication module.
Process 500 traverses those portions of the metadata tree with a generation number that is between the head generation number and the tail generation number. During the traversal, if the entry is a metadata key, process 500 adds that entry to the M-Bloom bloom filter. If the entry is a data key, process 500 adds that entry to the D-Bloom bloom filter.
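The generation-number-bounded traversal can be sketched as follows, for illustration only; the flat list of tree entries and the field names are hypothetical simplifications of the metadata tree described above, and plain sets stand in for the M-Bloom and D-Bloom filters.

def incremental_populate(metadata_tree, head_gen, tail_gen, m_bloom, d_bloom):
    """Visit only tree entries written between the tail and head generations."""
    for entry in metadata_tree:
        if not (tail_gen < entry["gen"] <= head_gen):
            continue                        # written before the last cleaned snapshot
        if entry["kind"] == "metadata":
            m_bloom.add(entry["key"])       # metadata keys go to M-Bloom
        else:
            d_bloom.add(entry["key"])       # data keys go to D-Bloom


m_bloom, d_bloom = set(), set()             # stand-ins for the two bloom filters
tree = [{"key": "meta-3", "kind": "metadata", "gen": 7},
        {"key": "obj-11", "kind": "data", "gen": 8},
        {"key": "obj-2", "kind": "data", "gen": 3}]
incremental_populate(tree, head_gen=8, tail_gen=5, m_bloom=m_bloom, d_bloom=d_bloom)
print(m_bloom, d_bloom)                     # {'meta-3'} {'obj-11'}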
In one embodiment, the incremental cleaner cleans this last category of segments/objects. In order to achieve this, the StorFS system stores a “generation number” in the metadata tree for each object it writes. The incremental cleaner traverses those portions of the metadata tree whose generation number is greater than the last generation number it cleaned, and populates the live object map. In this embodiment, the segment cleaner cleans those segments that were written during that time interval. In one embodiment, a system administrator can configure when the incremental and full cleaners are run. For example and in one embodiment, a system administrator may set policies to run the incremental cleaner on weekdays and the full cleaner on weekends.
As described above, the garbage collection module 304 compacts segments that have low utilization.
At block 712, process 700 verifies the number of live keys in the live object map. In one embodiment, the live object map is a probabilistic data structure such as a bloom filter, as described above. In one embodiment, process 700 verifies the number of live keys in the M-Bloom or D-Bloom filter. For example and in one embodiment, if the vNode is a metadata vNode, process 700 verifies the number of live keys in the M-Bloom filter for this metadata vNode. As another example and embodiment, if the vNode is a data vNode, process 700 verifies the number of live keys in the D-Bloom filter for this data vNode. In one embodiment, process 700 verifies the number of live keys by querying the live object map for membership of each key. If the key is present, process 700 counts the key as live and increments the count. If not, process 700 determines the key is dead. With the number of live keys determined, process 700 determines if the number of live keys is less than a threshold at block 714. In one embodiment, the threshold is the number of live keys that represents a low utilization for that segment. If a segment has a low utilization, the objects for that segment can be moved to another segment and this segment can be deleted. If the number of live keys is greater than or equal to the threshold, process 700 proceeds to block 702 to analyze another segment. If the number of live keys is less than the threshold, process 700 deletes the segment at block 716. In one embodiment, process 700 deletes the segment by moving the keys for this segment forward, adding a tombstone for the remaining keys, and deleting the segment. In one embodiment, by deleting the segment, the storage taken up by this segment is freed and can be used for other segments and/or storage.
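For illustration only, the following Python sketch shows the live-key count and threshold test of blocks 712 and 714; the selection between the M-Bloom and D-Bloom filters by vNode kind follows the description above, while the function name and data shapes are hypothetical.

def should_compact(segment_keys, vnode_kind, m_bloom, d_bloom, threshold):
    """Count live keys via the per-kind live object map and compare to the threshold."""
    live_map = m_bloom if vnode_kind == "metadata" else d_bloom
    live_keys = sum(1 for key in segment_keys if key in live_map)
    return live_keys < threshold


m_bloom, d_bloom = {"meta-3"}, {"obj-11"}   # stand-ins for the bloom filters
print(should_compact(["obj-11", "obj-2", "obj-5"], "data",
                     m_bloom, d_bloom, threshold=2))    # True: only 1 live key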
In one embodiment, each segment can contain four types of entities: live objects, dead objects, object tombstone entries, and segment tombstone entries. Process 700 handles these entities as follows at block 716: tombstone entries are discarded and need not be carried forward; live objects are copy forwarded to the new segments and the corresponding index for each object is updated; a tombstone entry is added for each dead object; and a segment tombstone is added for the segment being compacted. Execution then proceeds to block 702.
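A minimal Python sketch of this entity handling during compaction is shown below for illustration only; the representation of segment entries as (key, kind) pairs and the index update are hypothetical simplifications.

def compact_segment(segment_id, entries, live_map, new_segment, index):
    """Copy forward live objects and record tombstones for everything else."""
    for key, kind in entries:
        if kind in ("object_tombstone", "segment_tombstone"):
            continue                                    # tombstones are not carried forward
        if key in live_map:                             # live object: copy forward
            new_segment.append((key, "object"))
            index[key] = "new-seg"                      # update the object's index entry
        else:                                           # dead object: leave a tombstone
            new_segment.append((key, "object_tombstone"))
    new_segment.append((segment_id, "segment_tombstone"))  # mark the compacted segment


new_segment, index = [], {}
entries = [("obj-11", "object"), ("obj-2", "object"), ("old", "object_tombstone")]
compact_segment("seg-1", entries, live_map={"obj-11"},
                new_segment=new_segment, index=index)
print(new_segment, index)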
As shown in
The mass storage 1111 is typically a magnetic hard drive or a magnetic optical drive or an optical drive or a DVD RAM or a flash memory or other types of memory systems, which maintain data (e.g. large amounts of data) even after power is removed from the system. Typically, the mass storage 1111 will also be a random access memory although this is not required. While
Portions of what was described above may be implemented with logic circuitry such as a dedicated logic circuit or with a microcontroller or other form of processing core that executes program code instructions. Thus processes taught by the discussion above may be performed with program code such as machine-executable instructions that cause a machine that executes these instructions to perform certain functions. In this context, a “machine” may be a machine that converts intermediate form (or “abstract”) instructions into processor specific instructions (e.g., an abstract execution environment such as a “process virtual machine” (e.g., a Java Virtual Machine), an interpreter, a Common Language Runtime, a high-level language virtual machine, etc.), and/or, electronic circuitry disposed on a semiconductor chip (e.g., “logic circuitry” implemented with transistors) designed to execute instructions such as a general-purpose processor and/or a special-purpose processor. Processes taught by the discussion above may also be performed by (in the alternative to a machine or in combination with a machine) electronic circuitry designed to perform the processes (or a portion thereof) without the execution of program code.
The present invention also relates to an apparatus for performing the operations described herein. This apparatus may be specially constructed for the required purpose, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer. Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, and magnetic-optical disks, read-only memories (ROMs), RAMs, EPROMs, EEPROMs, magnetic or optical cards, or any type of media suitable for storing electronic instructions, and each coupled to a computer system bus.
A machine readable medium includes any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer). For example, a machine readable medium includes read only memory (“ROM”); random access memory (“RAM”); magnetic disk storage media; optical storage media; flash memory devices; etc.
An article of manufacture may be used to store program code. An article of manufacture that stores program code may be embodied as, but is not limited to, one or more memories (e.g., one or more flash memories, random access memories (static, dynamic or other)), optical disks, CD-ROMs, DVD ROMs, EPROMs, EEPROMs, magnetic or optical cards or other type of machine-readable media suitable for storing electronic instructions. Program code may also be downloaded from a remote computer (e.g., a server) to a requesting computer (e.g., a client) by way of data signals embodied in a propagation medium (e.g., via a communication link (e.g., a network connection)).
The preceding detailed descriptions are presented in terms of algorithms and symbolic representations of operations on data bits within a computer memory. These algorithmic descriptions and representations are the tools used by those skilled in the data processing arts to most effectively convey the substance of their work to others skilled in the art. An algorithm is here, and generally, conceived to be a self-consistent sequence of operations leading to a desired result. The operations are those requiring physical manipulations of physical quantities. Usually, though not necessarily, these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers, or the like.
It should be kept in mind, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities. Unless specifically stated otherwise as apparent from the above discussion, it is appreciated that throughout the description, discussions utilizing terms such as “receiving,” “determining,” “transmitting,” “computing,” “detecting,” “performing,” “generating,” “communicating,” “reading,” “writing,” “transferring,” “updating,” “scanning,” “compacting,” or the like, refer to the action and processes of a computer system, or similar electronic computing device, that manipulates and transforms data represented as physical (electronic) quantities within the computer system's registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
The processes and displays presented herein are not inherently related to any particular computer or other apparatus. Various general-purpose systems may be used with programs in accordance with the teachings herein, or it may prove convenient to construct a more specialized apparatus to perform the operations described. The required structure for a variety of these systems will be evident from the description below. In addition, the present invention is not described with reference to any particular programming language. It will be appreciated that a variety of programming languages may be used to implement the teachings of the invention as described herein.
The foregoing discussion merely describes some exemplary embodiments of the present invention. One skilled in the art will readily recognize from such discussion, the accompanying drawings and the claims that various modifications can be made without departing from the spirit and scope of the invention.
Applicant claims the benefit of priority of prior, provisional application Ser. No. 61/739,685, filed Dec. 19, 2012, the entirety of which is incorporated by reference.