This invention relates to creating snapshots of a storage volume.
In many contexts, it is helpful to be able to return a database to an original state or some intermediate state. In this manner, changes to software or other database configuration parameters may be tested without fear of corrupting critical data.
The systems and methods disclosed herein provide an improved approach for creating snapshots of a database and returning to a previous snapshot.
In order that the advantages of the invention will be readily understood, a more particular description of the invention briefly described above will be rendered by reference to specific embodiments illustrated in the appended drawings. Understanding that these drawings depict only typical embodiments of the invention and are not therefore to be considered limiting of its scope, the invention will be described and explained with additional specificity and detail through use of the accompanying drawings, in which:
Referring to
One or more compute nodes 110 are also coupled to the network 104 and host user applications that generate read and write requests with respect to storage volumes managed by the storage manager 102 and stored within the memory devices 108 of the storage nodes 106.
The methods disclosed herein ascribe certain functions to the storage manager 102, storage nodes 106, and compute nodes 110. The methods disclosed herein are particularly useful for large scale deployments including large amounts of data distributed over many storage nodes 106 and accessed by many compute nodes 110. However, the methods disclosed herein may also be implemented using a single computer implementing the functions ascribed herein to some or all of the storage manager 102, storage nodes 106, and compute nodes 110.
Referring to
The method 200 includes receiving, by the storage manager 102, a request to create a new snapshot for a storage volume. A storage volume as referred to herein may be a virtual storage volume that may be divided into individual slices. For example, storage volumes as described herein may be 1 TB in size and be divided into 1 GB slices. In general, a slice and its snapshot are stored on a single storage node 106, whereas a storage volume may have the slices thereof stored by multiple storage nodes 106.
The request received at step 202 may be received from a human operator or generated automatically, such as according to a backup scheduler executing on the storage manager 102 or some other computing device. The subsequent steps of the method 200 may be executed in response to receiving 202 the request.
The method 200 may include transmitting 204 a quiesce instruction to all compute nodes 110 that are associated with the storage volume, e.g., all compute nodes 110 that have pending write requests for the storage volume. In some embodiments, the storage manager 102 may store a mapping of compute nodes 110 to a particular storage volume used by the compute nodes 110. Accordingly, step 204 may include sending 204 the quiesce instruction to all of these compute nodes 110. Alternatively, the instruction may be transmitted 204 to all compute nodes 110 and include an identifier of the storage volume. The compute nodes 110 may then suppress any write instructions referencing that storage volume.
The quiesce instruction instructs the compute nodes 110 that receive it to suppress 206 transmitting write requests to the storage nodes 106 for the storage volume referenced by the quiesce instruction. The quiesce instruction may further cause the compute nodes 110 that receive it to report 208 to the storage manager 102 when no write requests are pending for that storage volume, i.e. all write requests issued to one or more storage nodes 106 and referencing slices of that storage volume have been acknowledged by the one or more storage nodes 106.
In response to receiving the report of step 208 from one or more compute nodes, e.g. all compute nodes that are mapped to the storage volume that is the subject of the snapshot request of step 202, the storage manager 102 transmits 210 an instruction to the storage nodes 106 associated with the storage volume to create a new snapshot of that storage volume. Step 210 may further include transmitting 210 an instruction to the compute nodes 110 associated with the storage volume to commence issuing write commands to the storage nodes 106 associated with the storage volume. In some embodiments, the instruction of step 210 may include an identifier of the new snapshot. Accordingly, subsequent input/output operations (IOPs) transmitted 214 from the compute nodes may reference that snapshot identifier. Likewise, the storage node 106 may associate the snapshot identifier with data subsequently written to the storage volume, as described in greater detail below.
In response to receiving 210 the instruction to create a new snapshot, each storage node 106 finalizes 212 segments associated with the current snapshot, which may include performing garbage collection, as described in greater detail below. In addition, subsequent IOPs received by the storage node may also be processed 216 using the new snapshot as the current snapshot, as is also described in greater detail below.
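By way of illustration only, the control flow of the method 200 may be sketched as follows. The Python class and method names here are assumptions introduced for this sketch, not part of the disclosed system.

```python
# A minimal sketch of the method 200, assuming hypothetical ComputeNode
# and StorageNode interfaces providing the methods used below.
class StorageManager:
    def __init__(self, compute_nodes, storage_nodes):
        self.compute_nodes = compute_nodes  # compute nodes 110 mapped to the volume
        self.storage_nodes = storage_nodes  # storage nodes 106 storing its slices

    def create_snapshot(self, volume_id, new_snapshot_id):
        # Step 204: instruct the compute nodes to suppress write requests.
        for node in self.compute_nodes:
            node.quiesce(volume_id)
        # Step 208: wait until each compute node reports that all write
        # requests it issued for the volume have been acknowledged.
        for node in self.compute_nodes:
            node.wait_until_no_pending_writes(volume_id)
        # Step 210: instruct the storage nodes to begin a new snapshot and
        # the compute nodes to resume, tagging subsequent IOPs with the
        # new snapshot identifier (step 214).
        for node in self.storage_nodes:
            node.new_snapshot(volume_id, new_snapshot_id)
        for node in self.compute_nodes:
            node.resume(volume_id, new_snapshot_id)
```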
Referring to
For each logical volume, the storage manager 102 may store and maintain a volume map 300. For each slice in the logical volume, the volume map may include an entry including a node identifier 302 identifying the storage node 106 to which the slice is assigned and an offset 304 within the logical volume at which the slice begins. In some embodiments, slices are assigned both to a storage node 106 and a specific storage device hosted by the storage node 106. Accordingly, the entry may further include a disk identifier of the storage node 106 referencing the specific storage device to which the slice is assigned.
The remaining data structures of
In some embodiments, an entry in the slice map 308 is created for a slice of the logical volume only after a write request is received that references the offset 304 for that slice. This further supports the implementation of overprovisioning such that slices may be assigned to a storage node 106 in excess of its actual capacity since the slice is only tied up in the slice map 308 when it is actually used.
The storage node 106 may further store and maintain a segment map 314. The segment map 314 includes entries either including or corresponding to a particular physical segment identifier (PSID) 316. For example, the segment map 314 may be in an area of memory such that each address in that area corresponds to one PSID 316 such that the entry does not actually need to include the PSID 316. The entries of the segment map 314 may further include a slice identifier 310 that identifies a local slice of the storage node 106 to which the PSID 316 has been assigned. The entry may further include a virtual segment identifier (VSID) 318. As described in greater detail below, each time a segment is assigned to a logical volume and a slice of a logical volume, it may be assigned a VSID 318 such that the VSIDs 318 increase in value monotonically in order of assignment. In this manner, the most recent PSID 316 assigned to a logical volume and slice of a logical volume may easily be determined by the magnitude of the VSIDs 318 mapped to the PSIDs 316. In some embodiments, VSIDs 318 are assigned in a monotonically increasing series for all segments assigned to a volume ID 312. In other embodiments, each offset 304 and its corresponding slice ID 310 is assigned VSIDs separately, such that each slice ID 310 has its own corresponding series of monotonically increasing VSIDs 318 assigned to segments allocated to that slice ID 310.
The entries of the segment map 314 may further include a data offset 320 for the PSID 316 of that entry. As described in greater detail below, when data is written to a segment it may be written at a first open position from a first end of the segment. Accordingly, the data offset 320 may indicate the location of this first open position in the segment. The data offset 320 for a segment may therefore be updated each time data is written to the segment to indicate where the new first open position is.
The entries of the segment map 314 may further include a metadata offset 322. As described in detail below, for each write request written to a segment, a metadata entry may be stored in that segment at a first open position from a second end of the segment opposite the first end. Accordingly, the metadata offset 322 in an entry of the segment map 314 may indicate a location of this first open position of the segment corresponding to the entry.
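The structure of the segment map 314 and the monotonic assignment of VSIDs 318 may be pictured with the following sketch; representing the map as a list indexed by PSID 316 and the VSID source as a simple counter are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SegmentMapEntry:
    slice_id: Optional[int] = None  # slice ID 310; None marks a free PSID
    vsid: Optional[int] = None      # virtual segment identifier 318
    data_offset: int = 0            # 320: first open position from the first end
    metadata_offset: int = 0        # 322: first open position from the second end

class SegmentMap:
    def __init__(self, num_segments):
        # One entry per physical segment; the index of an entry is its
        # PSID 316, so the PSID need not be stored in the entry itself.
        self.entries = [SegmentMapEntry() for _ in range(num_segments)]
        self.next_vsid = 1          # VSIDs 318 increase monotonically

    def allocate(self, slice_id):
        for psid, entry in enumerate(self.entries):
            if entry.slice_id is None:      # a free PSID
                entry.slice_id = slice_id
                entry.vsid = self.next_vsid
                self.next_vsid += 1
                return psid
        raise RuntimeError("no free physical segment")
```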
Each PSID 316 corresponds to a physical segment 324 on a device hosted by the storage node 106. As shown, data payloads 326 from various write requests are written to the physical segment 324 starting from a first end (left) of the physical segment. The physical segment may further store index pages 328 such that index pages are written starting from a second end (right) of the physical segment 324.
Each index page 328 may include a header 330. The header 330 may be coded data that enables identification of a start of an index page 328. The entries of the index page 328 each correspond to one of the data payloads 326 and are written in the same order as the data payloads 326. Each entry may include a logical block address (LBA) 332. The LBA 332 indicates an offset within the logical volume to which the data payload corresponds. The LBA 332 may indicate an offset within a slice of the logical volume. For example, inasmuch as the PSID 316 is mapped to a slice ID 310 that is mapped to an offset 304 within a particular volume ID 312 by the maps 308 and 314, an LBA 332 within the slice may be mapped to the corresponding offset 304 to obtain a fully resolved address within the logical volume.
In some embodiments, the entries of the index page 328 may further include a physical offset 334 of the data payload 326 corresponding to that entry. Alternatively or additionally, the entries of the index page 328 may include a size 336 of the data payload 326 corresponding to the entry. In this manner, the offset to the start of a data payload 326 for an entry may be obtained by adding up the sizes 336 of previously written entries in the index pages 328.
The metadata offset 322 may point to the last index page 328 (furthest from the right in the illustrated example) and may further point to the first open entry in the last index page 328. In this manner, for each write request, the metadata entry for that request may be written to the first open position in the last index page 328. If all of the index pages 328 are full, a new index page 328 may be created and stored at the first open position from the second end and the metadata for the write request may be added at the first open position in that index page 328.
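A minimal model of this two-ended layout, assuming a fixed-size metadata entry for simplicity, is sketched below.

```python
class Segment:
    """Payload data 326 grows from the first end of the segment while
    metadata entries (as held in index pages 328) grow from the second."""
    def __init__(self, size, meta_entry_size=16):
        self.size = size
        self.data_offset = 0        # cf. data offset 320
        self.meta_bytes = 0         # cf. metadata offset 322, counted from the end
        self.meta_entry_size = meta_entry_size
        self.index = []             # (lba, physical offset, size), in write order

    def has_room(self, payload_len):
        # The segment is full when the payload plus one more metadata entry
        # would collide with the metadata written at the opposite end.
        return (self.data_offset + payload_len +
                self.meta_bytes + self.meta_entry_size) <= self.size

    def write(self, lba, payload_len):
        assert self.has_room(payload_len)
        offset = self.data_offset
        self.data_offset += payload_len           # data advances from the first end
        self.meta_bytes += self.meta_entry_size   # metadata advances from the second
        self.index.append((lba, offset, payload_len))
        return offset
```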
The storage node 106 may further store and maintain a block map 338. A block map 338 may be maintained for each logical volume and/or for each slice offset of each logical volume, e.g. for each local slice ID 310 which is mapped to a slice offset and logical volume by the slice map 308. The block map 338 may include entries corresponding to each LBA 332 within the logical volume or slice of the logical volume. The entries may include the LBA 332 itself or may be stored at a location within the block map corresponding to an LBA 332.
The entry for each LBA 332 may include the PSID 316 identifying the physical segment 324 to which a write request referencing that LBA was last written. In some embodiments, the entry for each LBA 332 may further indicate the physical offset 334 within that physical segment 324 to which the data for that LBA was written. Alternatively, the physical offset 334 may be obtained from the index pages 328 of that physical segment. As data is written to an LBA 332, the entry for that LBA 332 may be overwritten to indicate the physical segment 324 and physical offset 334 within that segment 324 to which the most recent data was written.
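Reduced to its essentials, the block map 338 might be modeled as follows, with a dictionary standing in for the per-LBA entries.

```python
class BlockMap:
    """Maps each written LBA 332 to the PSID 316 (and physical offset 334)
    holding the most recently written data for that LBA."""
    def __init__(self):
        self.entries = {}                  # lba -> (psid, physical_offset)

    def record(self, lba, psid, offset):
        self.entries[lba] = (psid, offset)   # overwritten on each new write

    def locate(self, lba):
        return self.entries.get(lba)         # None if the LBA was never written
```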
In embodiments implementing multiple snapshots for a volume and slice of a volume, the segment map 314 may additionally include a snapshot ID 340 identifying the snapshot to which the PSID 316 has been assigned. In particular, each time a segment is allocated to a volume and slice of a volume, the current snapshot identifier for that volume and slice of a volume will be included as the snapshot ID 340 for that PSID 316.
In response to an instruction to create a new snapshot for a volume and slice of a volume, the storage node 106 will store the new current snapshot identifier, e.g. increment the previously stored current snapshot ID 340, and subsequently allocated segments will include the current snapshot ID 340. PSIDs 316 that are not filled and are allocated to the previous snapshot ID 340 may no longer be written to. Instead, they may be finalized or subject to garbage collection (see below).
The method 400 includes receiving 402 a write request. The write request may include payload data, payload data size, and an LBA as well as fields such as a slice identifier, a volume identifier, and a snapshot identifier. Where a slice identifier is included, the LBA may be an offset within the slice, otherwise the LBA may be an address within the storage volume.
The method 400 may include evaluating 404 whether a PSID 316 is allocated to the snapshot referenced in the write request and whether the physical segment 324 corresponding to the PSID 316 ("the current segment") has space for the payload data. In some embodiments, as write requests are performed with respect to a PSID 316, the amount of data written as data 326 and index pages 328 may be tracked, such as by way of the data offset 320 and metadata offset 322 pointers. Accordingly, if the amount of previously-written data 326 and the number of allocated index pages 328 plus the size of the payload data and its corresponding metadata entry exceeds the capacity of the current segment, it may be determined to be full at step 404.
If the current segment is determined 404 to be full, the method 400 may include allocating 406 a new PSID 316 as the current PSID 316 and its corresponding physical segment 324 as the current segment for the snapshot referenced in the write request. In some embodiments, the status of PSIDs 316 of the physical storage devices 108 may be flagged in the segment map 314 as allocated or free as a result of allocation and garbage collection, which is discussed below. Accordingly, a free PSID 316 may be identified in the segment map 314 and flagged as allocated.
The segment map 314 may also be updated 408 to include a slice ID 310 and snapshot ID 340 mapping the current PSID 316 to the snapshot ID, volume ID 312, and offset 304 included in the write request. Upon allocation, the current PSID 316 may also be mapped to a VSID (virtual segment identifier) 318 that will be a number higher than all previously assigned VSIDs 318 such that the VSIDs increase monotonically, subject, of course, to the size limit of the field used to store the VSID 318. However, the size of the field may be sufficiently large that it is not limiting in most situations.
The method 400 may include writing 410 the payload data to the current segment. As described above, this may include writing 410 payload data 326 to the free location closest to the first end of the current segment.
The method 400 may further include writing 412 a metadata entry to the current segment. This may include writing the metadata entry (LBA, size) to the first free location closest to the second end of the current segment. Alternatively, this may include writing the metadata entry to the first free location in an index page 328 that has room for it or creating a new index page 328 located adjacent a previous index page 328. Steps 410, 412 may include updating one or more pointers or tables that indicate an amount of space available in the physical segment, such as a pointer 320 to the first free address closest to the first end and a pointer 322 to the first free address closest to the second end, which may be the first free address before the last index page 328 and/or the first free address in the last index page. In particular, these pointers may be maintained as the data offset 320 and metadata offset 322 in the segment map 314 for the current PSID 316.
The method 400 may further include updating 416 the block map 338 for the current snapshot. In particular, for each LBA 332 referenced in the write request, an entry in the block map 338 for that LBA 332 may be updated to reference the current PSID 316. A write request may write to a range of LBAs 332. Accordingly, the entry for each LBA 332 in that range may be updated to refer to the current PSID 316.
Updating the block map 338 may include evaluating 414 whether an entry for a given LBA 332 referenced in the write request already exists in the block map 338. If so, then that entry is overwritten 418 to refer to the current PSID 316. If not, an entry is updated 416 in the block map 338 that maps the LBA 332 to the current PSID 316. In this manner, the block map 338 only references LBAs 332 that are actually written to, which may be less than all of the LBAs 332 of a storage volume or slice. In other embodiments, the block map 338 is of fixed size and includes an entry for each LBA 332 regardless of whether it has been written to previously. The block map 338 may also be updated to include the physical offset 334 within the current segment to which the data 326 from the write request was written.
In some embodiments, the storage node 106 may execute multiple write requests in parallel for the same LBA 332. Accordingly, it is possible that a later write can complete first and update the block map 338 whereas a previous write request to the same LBA 332 completes later. The data of the previous write request is therefore stale and the block map 338 should not be updated.
Suppression of block map 338 updates may be achieved by using the VSIDs 318 and physical offset 334. When executing a write request for an LBA, the VSID 318 mapped to the segment 324 and the physical offset 334 to which the data is to be, or was, written may be compared to the VSID 318 and offset 334 corresponding to the entry in the block map 338 for the LBA 332. If the VSID 318 mapped in the segment map 314 to the PSID 316 in the entry of the block map 338 corresponding to the LBA 332 is higher than the VSID 318 of the write request, then the block map 338 will not be updated. Likewise, if the VSID 318 corresponding to the PSID 316 in the block map 338 is the same as the VSID 318 for the write request and the physical offset 334 in the block map 338 is higher than the offset 334 to which the data of the write request is to be or was written, the block map 338 will not be updated for the write request.
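Reusing the SegmentMap and BlockMap sketches above, one possible reading of this suppression check is the following.

```python
def should_update_block_map(block_map, segment_map, lba, write_psid, write_offset):
    """Return False when the block map already references newer data for
    the LBA, i.e. a later parallel write completed first."""
    entry = block_map.locate(lba)
    if entry is None:
        return True
    current_psid, current_offset = entry
    current_vsid = segment_map.entries[current_psid].vsid
    write_vsid = segment_map.entries[write_psid].vsid
    if current_vsid > write_vsid:
        return False   # block map references a later-allocated segment
    if current_vsid == write_vsid and current_offset > write_offset:
        return False   # same segment, but a later write landed at a higher offset
    return True
```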
As a result of steps 414-418, the block map 338 only lists the PSID 316 where the valid data for a given LBA 332 is stored. Accordingly, only the index pages 328 of the physical segment 324 mapped to the PSID 316 listed in the block map 338 need be searched to find the data for a given LBA 332. In instances where the physical offset 334 is stored in the block map 338, no searching is required.
The method 500 may include allocating 502 a new PSID 316 and its corresponding physical segment 324 as the current PSID 316 and current segment for the storage volume, e.g., by including a slice ID 310 corresponding to a volume ID 312 and offset 304 included in the new snapshot instruction or the write request referencing the new snapshot ID 340. Allocating 502 a new segment may include updating 504 an entry in the segment map 314 that maps the current PSID 316 to the snapshot ID 340 and a slice ID 310 corresponding to a volume ID 312 and offset 304 included in the new snapshot instruction.
As noted above, when a PSID 316 is allocated, the VSID 318 for that PSID 316 may be a number higher than all VSIDs 318 previously assigned to that volume ID 312, and possibly to that slice ID 310 (where slices have separate series of VSIDs 318). The snapshot ID 340 of the new snapshot may be included in the new snapshot instruction or the storage node 106 may simply assign a new snapshot ID that is the previous snapshot ID 340 plus one.
The method 500 may further include finalizing 506 and performing garbage collection with respect to PSIDs 316 mapped to one or more previous snapshots IDs 340 for the volume ID 312 in the segment map 314, e.g., PSIDs 316 assigned to the snapshot ID 340 that was the current snapshot immediately before the new snapshot instruction was received.
Note that the block map 338 records the PSID 316 for the latest version of the data written to a given LBA 332. Accordingly, any references to that LBA 332 in the physical segment 324 of a PSID 316 mapped to a lower-numbered VSID 318 may be marked 604 as invalid. For the physical segment 324 of the PSID 316 in the block map 338 for a given LBA 332, the last metadata entry for that LBA 332 may be found and marked as valid, i.e. the last entry referencing the LBA 332 in the index page 328 that is the last index page 328 including a reference to the LBA 332. Any other references to the LBA 332 in the physical segment 324 may be marked 604 as invalid. Note that the physical offset 334 for the LBA 332 may be included in the block map 338, so all metadata entries not corresponding to that physical offset 334 may be marked as invalid.
The method 600 may then include processing 606 each segment ID S of the PSIDs 316 mapped to the subject snapshot according to steps 608-620. In some embodiments, the processing of step 606 may exclude a current PSID 316, i.e. the last PSID 316 assigned to the subject snapshot. As described below, garbage collection may include writing valid data from a segment to a new segment. Accordingly, step 606 may commence with the PSID 316 having the lowest-valued VSID 318 for the subject snapshot. As any segments 324 are filled according to the garbage collection process, they may also be evaluated to be finalized or subject to garbage collection as described below.
The method 600 may include evaluating 608 whether garbage collection is needed for the segment ID S. This may include comparing the amount of valid data in the physical segment 324 for the segment ID S to a threshold. For example, if only 40% of the data stored in the physical segment 324 for the segment ID S has been marked valid, then garbage collection may be determined to be necessary. Other thresholds may be used, such as a value between 30% and 80%. In other embodiments, the amount of valid data is compared to the size of the physical segment 324, e.g., the segment ID S is determined to need garbage collection if the amount of valid data is less than X% of the size of the physical segment 324, where X is a value between 30 and 80, such as 40.
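The evaluation of step 608 reduces to a simple threshold test, sketched below with 40 as the illustrative default.

```python
def needs_garbage_collection(valid_bytes, segment_size, threshold_percent=40):
    # Step 608: collect the segment when less than threshold_percent of it
    # still holds valid data; any value between 30 and 80 may be used.
    return valid_bytes < segment_size * threshold_percent / 100
```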
If garbage collection is determined 608 not to be needed, the method 600 may include finalizing 610 the segment ID S. Finalizing may include flagging the segment ID S in the segment map 314 as full and no longer available to be written to. This flag may be stored in another table that lists finalized PSIDs 316.
If garbage collection is determined 608 to be needed, then the method 600 may include writing 612 the valid data to a new segment. For example, the valid data may be written to a current PSID 316, i.e. the most-recently allocated PSID 316 for the subject snapshot, until its corresponding physical segment 324 is full. If there is no room in the physical segment 324 for the current PSID 316, step 612 may include assigning a new PSID 316 as the current PSID 316 for the subject snapshot. The valid data, or remaining valid data, may then be written to the physical segment 324 corresponding to the current PSID 316 for the subject snapshot.
Note that writing 612 the valid data to the new segment may be processed in the same manner as for any other write request (see the method 400 described above).
After the valid data is written to a new segment, the method 600 may further include freeing 614 the PSID S in the segment map 314, e.g., marking the entry in segment map 314 corresponding to PSID S as free.
The process of garbage collection may be simplified for PSIDs 316 that are associated with the subject snapshot in the segment map 314 but are not listed in the block map 338 with respect to any LBA 332. The physical segments 324 of such PSIDs 316 do not store any valid data. Entries for such PSIDs 316 in the segment map 314 may therefore simply be deleted and marked as free in the segment map 314.
The following steps of the method 700 may be initially executed using the snapshot ID 340 included in the read request as “the subject snapshot,” i.e., the snapshot that is currently being processed to search for requested data. The method 700 includes receiving 702 the read request by the storage node 106 and identifying 704 one or more PSIDs 316 in the segment map 314 assigned to the subject snapshot and searching 706 the metadata entries for these PSIDs 316 for references to the LBA 332 included in the read request.
The searching of step 706 may be performed in order of decreasing VSID 318, i.e. such that the metadata entries for the last allocated PSID 316 are searched first. In this manner, if a reference to the LBA 332 is found, the metadata of any previously-allocated PSIDs 316 does not need to be searched.
Searching 706 the metadata for a PSID 316 may include searching one or more index pages 328 of the physical segment 324 corresponding to the PSID 316. As noted above, one or more index pages 328 are stored at the second end of the physical segment 324 and entries are added to the index pages 328 in the order they are received. Accordingly, the last-written metadata including the LBA 332 in the last index page 328 (furthest from the second end of the physical segment 324) in which the LBA 332 is found will correspond to the valid data for that LBA 332. To locate the data 326 corresponding to the last-written metadata for the LBA 332 in the physical segment 324, the sizes 336 for all previously-written metadata entries may be summed to find a start address in the physical segment 324 for the data 326. Alternatively, if the physical offset 334 is included, then the data 326 corresponding to the metadata may be located without summing the sizes 336.
If reference to the LBA 332 is found 708 in the physical segment 324 for any of the PSIDs 316 allocated to the subject snapshot, the data 326 corresponding to the last-written metadata entry including that LBA 332 in the physical segment 324 mapped to the PSID 316 having the highest VSID 318 of all PSIDs 316 in which the LBA is found will be returned 710 to the application that issued the read request.
If the LBA 332 is not found in the metadata entries for any of the PSIDs 316 mapped to subject snapshot, the method 700 may include evaluating 712 whether the subject snapshot is the earliest snapshot for the storage volume of the read request on the storage node 106. If so, then the data requested is not available to be read and the method 700 may include returning 714 a “data not found” message or otherwise indicating to the requesting application that the data is not available.
If an earlier snapshot than the subject snapshot is present for the storage volume on the storage node 106, e.g., there exists at least one PSID 316 mapped to a snapshot ID 340 that is lower than the snapshot ID 340 of the subject snapshot ID, then the immediately preceding snapshot ID 340 will be set 716 to be the subject snapshot and processing will continue at step 704, i.e. the PSIDs 316 mapped to the subject snapshot will be searched for the LBA 332 in the read request as described above.
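The search order of the method 700 may be summarized by the sketch below; the storage-node accessors used (psids_for_snapshot, vsid, last_index_entry, read_payload, previous_snapshot) are hypothetical names for the operations described above.

```python
def read(storage_node, snapshot_id, lba):
    """Steps 704-716: search the subject snapshot's segments in order of
    decreasing VSID 318, falling back to earlier snapshots as needed."""
    subject = snapshot_id
    while subject is not None:
        # PSIDs assigned to the subject snapshot, newest (highest VSID) first.
        psids = sorted(storage_node.psids_for_snapshot(subject),
                       key=storage_node.vsid, reverse=True)
        for psid in psids:
            entry = storage_node.last_index_entry(psid, lba)
            if entry is not None:
                return storage_node.read_payload(psid, entry)   # step 710
        subject = storage_node.previous_snapshot(subject)       # step 716
    return None   # step 714: "data not found"
```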
The method 700 is particularly suited for reading data from snapshots other than the current snapshot that is currently being written to. In the case of a read request from the current snapshot, the block map 338 may map each LBA 332 to the PSID 316 in which the valid data for that LBA 332 is written. Accordingly, for such embodiments, step 704 may include retrieving the PSID 316 for the LBA 332 in the read request from the block map 338 and only searching 706 the metadata corresponding to that PSID 316. Where the block map 338 stores a physical offset 334, then the data is retrieved from that physical offset within the physical segment 324 of the PSID 316 mapped to the LBA 332 of the read request.
In some embodiments, the block map 338 may be generated for a snapshot other than the current snapshot in order to facilitate executing read requests, such as where a large number of read requests are anticipated, in order to reduce latency. This may include searching the index pages 328 of the segments 324 allocated to the subject snapshot and its preceding snapshots to identify, for each LBA 332 to which data has been written, the PSID 316 having the highest VSID 318 of the PSIDs 316 having physical segments 324 storing data written to that LBA 332. This PSID 316 may then be written to the block map 338 for that LBA 332. Likewise, the physical offset 334 of the last-written data for that LBA 332 within the physical segment 324 for that PSID 316 may be identified as described above (e.g., as described above with respect to steps 704-716).
Referring to
The illustrated method 800 may be executed by the storage manager 102 and one or more storage nodes 106 in order to implement this functionality. The method 800 may include receiving 802 a clone instruction and executing the remaining steps of the method 800 in response to the clone instruction. The clone instruction may be received by the storage manager 102 from a user or be generated according to a script or other program executing on the storage manager 102 or a remote computing device in communication with the storage manager 102.
The method 800 may include recording 804 a clone branch in a snapshot tree. For example, referring to
In some embodiments, the clone instruction may specify which snapshot the clone snapshot is of. In other embodiments, the clone snapshot may be inferred to be a snapshot of the current snapshot. In such embodiments, a new principal snapshot may be created and become the current snapshot. The previous snapshot will then be finalized and be subject to garbage collection as described above. The clone will then branch from the previous snapshot. In the illustrated example, if node S2 represented the current snapshot, then a new snapshot represented by node S3 would be created. The snapshot of node S2 would then be finalized and subject to garbage collection, and the clone snapshot represented by node A1 would be created and node A1 would be added to the hierarchy as a descendent of node S2.
In some embodiments, the clone node A1, and possibly its descendants A2 to A4 (representing subsequent snapshots of the clone snapshot), may be distinguished from the nodes S1 to S5 representing principal snapshots, such as by means of a flag, a classification of the connection between the node A1 and node S2 that is its immediate ancestor, or by storing data defining node A1 in a separate data structure.
Following creation of a clone snapshot, other principal snapshots of the storage volume may be created and represented in the hierarchy by one or more of the nodes S2 to S5. A clone may be created of any of these snapshots and represented by additional clone nodes. In the illustrated example, node B1 represents a clone snapshot of the snapshot represented by node S4. Subsequent snapshots of the clone snapshot are represented by nodes B2 to B3.
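The hierarchy of principal and clone nodes may be modeled as follows; the snapshot identifiers assigned to the clone nodes here are arbitrary illustrative values.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class SnapshotNode:
    snapshot_id: int
    is_clone: bool = False                 # distinguishes A/B nodes from S nodes
    parent: Optional["SnapshotNode"] = field(default=None, repr=False)
    children: List["SnapshotNode"] = field(default_factory=list)

def add_node(parent, snapshot_id, is_clone=False):
    node = SnapshotNode(snapshot_id, is_clone, parent)
    if parent is not None:
        parent.children.append(node)
    return node

# The illustrated example: principal chain S1..S5, clone A1 branching
# from S2 and clone B1 branching from S4.
s1 = add_node(None, 1)
s2 = add_node(s1, 2)
a1 = add_node(s2, 101, is_clone=True)
s3 = add_node(s2, 3)
s4 = add_node(s3, 4)
b1 = add_node(s4, 201, is_clone=True)
s5 = add_node(s4, 5)
```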
Referring again to
In some instances, it may be desirable to store a clone snapshot on a different storage node 106 than the principal snapshots. Accordingly, the method 800 may include allocating 806 segments to the clone snapshot on the different storage node 106. This may be invoked by sending a new snapshot instruction referencing the clone snapshot (i.e., an identifier of the clone snapshot) to the different storage node 106 and instructing one or more compute nodes 110 to route IOPs for the clone snapshot to the different storage node 106.
The storage manager 102 may store, in each node of the hierarchy, data identifying one or more storage nodes 106 that store data for the snapshot represented by that node of the hierarchy. For example, each node may store or have associated therewith one or more identifiers of storage nodes 106 that store a particular snapshot ID for a particular volume ID. The node may further map one or more slice IDs (e.g., slice offsets) of a storage volume to one or more storage nodes 106 storing data for that slice ID and the snapshots for that slice ID.
Referring to
The method 1000 includes receiving 1002, by the storage manager 102, an instruction to roll back a storage volume to a particular snapshot SN. The method 1000 may then include processing 1004 each snapshot that is represented by a descendent node of the node representing snapshot SN in the snapshot hierarchy, i.e. snapshots SN+1 to SMAX, where SMAX is the last principal snapshot that is a descendent of snapshot SN (each "descendent snapshot"). For each descendent snapshot, processing 1004 may include evaluating 1006 whether that descendent is an ancestor of a node representing a clone snapshot. If not, then the storage manager 102 may instruct all storage nodes 106 storing segments mapped to the descendent snapshot to free 1008 these segments, i.e. delete entries from the segment map referencing the descendent snapshot and mark corresponding PSIDs 316 as free in the segment map 314.
If the descendent snapshot is found 1006 to be an ancestor of a clone snapshot, then step 1008 is not performed and the snapshot and any segments allocated to it are retained.
However, since node S4 is an ancestor of clone node B1, it is not removed and segments corresponding to it are not freed on one or more storage nodes in response to the roll back instruction. Inasmuch as each snapshot contains only data written to the storage volume after it was created, previous snapshots may be required to recreate the storage volume. Accordingly, the snapshots of nodes S3 to S1 are needed to create the snapshot of the storage volume corresponding to node B1.
Subsequent principal snapshots of the storage volume will be added as descendants of the node to which the storage volume was rolled back. In the illustrated example, a new principal snapshot is represented by node S6 that is an immediate descendent of node S3. Node S4 is only present due to clone node B1 and therefore may itself be classified as a clone node in the hierarchy in response to the rollback instruction of step 1002.
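Continuing the SnapshotNode sketch above, one possible realization of the rollback logic of steps 1004-1008 is shown below, where free_segments stands in for the instruction to the storage nodes 106 to free the PSIDs 316 of a snapshot.

```python
def has_clone_descendant(node):
    return node.is_clone or any(has_clone_descendant(c) for c in node.children)

def iter_subtree(node):
    yield node
    for child in node.children:
        yield from iter_subtree(child)

def rollback(target, free_segments):
    """Process each principal descendant of the rollback target."""
    for child in list(target.children):
        if child.is_clone:
            continue                           # clone branches are untouched
        if has_clone_descendant(child):
            # Retained only because a clone descends from it; it may now be
            # reclassified as a clone node (as node S4 is in the example).
            child.is_clone = True
            rollback(child, free_segments)     # prune its other descendants
        else:
            for node in iter_subtree(child):   # step 1008: free the segments
                free_segments(node.snapshot_id)
            target.children.remove(child)

# Rolling back to S3 frees S5 but retains S4 (now a clone node) and B1:
rollback(s3, free_segments=lambda sid: print("free snapshot", sid))
```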
Note that
Referring to
The method 1200 may be executed by a storage node 106 (“the current storage node”) with information retrieved from the storage manager 102 as noted below. The method 1200 may include receiving 1202 a read request, which may include such information as a snapshot ID, volume ID (and/or slice ID), LBA, and size (e.g. number of 4 KB blocks to read).
Note that the read request may be issued by an application executing on a compute node 110. The compute node 110 may determine the storage node 106 to which to transmit the read request using information from the storage manager 102. For example, the compute node 110 may transmit a request to obtain an identifier for the storage node 106 storing data for a particular slice and snapshot of a storage volume. The storage manager 102 may then obtain an identifier and/or address for the storage node 106 storing that snapshot and slice of the storage volume from the hierarchical representation of the storage volume and return it to the requesting compute node 110. For example, the storage manager 102 may retrieve this information from the node in the hierarchy representing the snapshot included in the read request.
In response to the read request, the current storage node performs the algorithm illustrated by subsequent steps of the method 1200. In particular, the method 1200 may include identifying 1204 segments assigned to the snapshot ID of the read request ("the subject snapshot") in the segment map 314.
The method 1200 may include searching 1206 the metadata of the segments identified in step 1204 for the LBA of the read request. If the LBA is found 1208, the data from the highest numbered segment having the LBA in its metadata is returned 1210, i.e. the data that corresponds to the last-written metadata entry including the LBA.
If the LBA is not found in any of the segments mapped to the subject snapshot, then the method 1200 may include evaluating 1212 whether the subject snapshot is the earliest snapshot on the current storage node. If not, then processing continues at step 1204 with the previous snapshot set 1214 as the subject snapshot.
Steps 1204-1214 may be performed in the same manner as for steps 704-714 of the method 700, including the various modifications and variations described above with respect to the method 700.
In contrast to the method 700, if the LBA is not found in any of the segments corresponding to the subject snapshot for any of the snapshots evaluated, then the method 1200 may include requesting 1216 a location, e.g. a storage node identifier, where an earlier snapshot for the volume ID or slice ID is stored. In response to this request, the storage manager 102 determines an identifier of a storage node 106 storing the snapshot corresponding to the immediate ancestor of the earliest snapshot stored on the current storage node in the hierarchy. The storage manager 102 may determine an identifier of the storage node 106 relating to the immediate-ancestor snapshot and that stores data for a slice ID and volume ID of the read request, as recorded for the nearest ancestor node in the hierarchy of the node corresponding to the earliest snapshot stored on the current storage node.
If the current storage node is found 1218 to store the earliest snapshot for the storage volume ID and/or slice ID of the read request, then the storage manager 102 may report this fact to the storage node, which will then return 1220 a message indicating that the requested LBA is not available for reading, such as in the same manner as step 714 of the method 700.
If another storage node stores an earlier snapshot for the volume ID and/or slice ID of the read request, then the read request may be transmitted 1222 to this next storage node by either the current storage node or the storage manager 102. The processing may then continue at step 1202 with the next storage node as the current storage node. The read request transmitted at step 1222 may have a snapshot ID set to the latest snapshot ID for the storage volume ID and/or slice ID of the original read request.
The method 1200 may be performed repeatedly across multiple storage nodes 106 until the earliest snapshot is encountered or the LBA of the read request is located.
Referring to
For example, a slice having a slice offset 1300 within a storage volume may be divided into segments. For example, a 1 GB slice may be divided into 32 MB segments, each segment beginning at a segment offset 1302, e.g. (N−1)*32 MB, where N is the position of the segment within the slice for slice offset 1300. A segment 1304 of storage (e.g. a physical segment 324) may be allocated to each segment offset 1302. The segment map 300 may map the PSID of the segment 1304 to some or all of the segment offset 1302 and the slice offset 1300 of the slice, or some other identifier corresponding to the volume identifier of the storage volume to which the slice belongs and the offset 1300 of the slice (see, e.g., the discussion of the data structures above).
For a given address 1308 including an N GB offset, an M MB offset, and a P KB offset, where N, M, and P are integers, the segment 1304 and an LBA 1306 within that segment may be readily identified. The N GB offset of the address corresponds to the slice offset 1300. The M MB offset may be resolved to the segment offset 1302 as M − M % 32, where % is the modulus operator such that (A*32 + B) % 32 = B.
The segment 1304 for the segment offset 1302 may then be obtained from the segment map 300. The offset within the segment may be obtained from the KB offset P. For example, for 4 KB logical blocks, the starting address for an LBA 1306 within the segment 1304 may be determined as P − P % 4.
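These relationships may be captured in a few lines, using the example values above (1 GB slices, 32 MB segments, 4 KB blocks).

```python
SEGMENT_MB = 32   # 32 MB segments within a 1 GB slice
BLOCK_KB = 4      # 4 KB logical blocks

def resolve(n_gb, m_mb, p_kb, segment_map):
    """Resolve an address of N GB + M MB + P KB to its segment and the
    starting offsets within that segment."""
    slice_offset = n_gb                          # slice offset 1300
    segment_offset = m_mb - m_mb % SEGMENT_MB    # segment offset 1302: M - M % 32
    segment = segment_map[(slice_offset, segment_offset)]   # segment 1304
    mb_in_segment = m_mb % SEGMENT_MB            # MB remainder within the segment
    lba_start_kb = p_kb - p_kb % BLOCK_KB        # LBA 1306 start: P - P % 4
    return segment, mb_in_segment, lba_start_kb

# For example, an address at 5 GB, 77 MB, 13 KB falls in the segment at
# slice offset 5, segment offset 64, at 13 MB + 12 KB within that segment.
segments = {(5, 64): "PSID-17"}                  # illustrative segment map
assert resolve(5, 77, 13, segments) == ("PSID-17", 13, 12)
```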
Note that in some embodiments, non-snapshot segments as described with respect to
Read and write operations are therefore performed in a straightforward manner: the segment 1304 and LBA offset are determined from the address of the operation, and data is read from that offset for a read operation or written to that offset for a write operation.
The method 1400 may include receiving 1402 a request to convert a storage volume from a non-snapshot storage volume to a snapshot storage volume. For example, a user may input the request to the storage manager 102, which then instructs storage nodes 106 storing slices of the storage volume referenced in the request to execute the method 1400 with respect to those slices.
In response to the request, the storage manager 102 invokes creation 1404 of a new snapshot. This may include executing the method 200 described above.
In some embodiments, segments 1304 allocated or both allocated and written to while the storage volume was a non-snapshot volume as described with respect to
For example, as described above, subsequent segments will be assigned VSIDs according to a monotonically increasing counter, and these VSIDs may be used when processing reads to snapshots (see, e.g., the method 700 described above).
The method 1400 may include assigning 1406 VSIDs (i.e., sequence numbers) to the segments 1304 allocated to the segment offsets 1302 of a slice offset 1300. In some embodiments, these VSIDs are assigned according to a mathematical relationship, i.e., the Nth 32 MB segment is assigned N as its VSID. Accordingly, VSIDs are assigned according to location of a segment within the slice not according to order of allocation. Segments allocated subsequent to creation of the snapshot are assigned VSIDs from the counter such that VSIDs indicate order of allocation.
The method 1400 may further include setting 1408 the VSID counter to a value corresponding to the size of the slice. For example, if a slice includes M segments, the VSID counter will be set to M, assuming that VSIDs are assigned starting at zero.
The method 1400 may further include creating 1410 a block map for the slice offset 1300. In particular, with reference to the block map 338, the LBA 332, PSID 316, and PO 334 for each LBA in a slice may be assigned according to mathematical relationships. For example, the segment map 300 maps a segment 1304 to a segment offset and that segment 1304 includes a range of LBAs, i.e. every 4 KB boundary is the offset of an LBA for 4 KB logical blocks. Accordingly, the block map 338 may be created and written to such that the entry for a given LBA 332 includes the PSID of the segment 1304 including that LBA 332 and the physical offset 334 of that LBA, i.e. the 4 KB boundary at which that LBA 332 is written for 4 KB blocks. For example, for an LBA that has an address at N GB, M MB, and P KB within the storage volume, the PO 334 will be P, or P/4 where the PO 334 is indicated in blocks and blocks are 4 KB in size.
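Steps 1406-1410 thus amount to computing these structures directly from the layout; a sketch, assuming LBAs and physical offsets are counted in 4 KB blocks, follows.

```python
def convert_slice(num_segments, blocks_per_segment):
    # Step 1406: the Nth segment of the slice is assigned N as its VSID,
    # i.e. VSIDs reflect position within the slice, not order of allocation.
    vsids = {psid: psid for psid in range(num_segments)}
    # Step 1408: the VSID counter is primed so that segments allocated
    # after conversion continue the series in order of allocation.
    vsid_counter = num_segments
    # Step 1410: the block map entry for LBA k follows mathematically:
    # segment k // blocks_per_segment, physical offset k % blocks_per_segment.
    block_map = {lba: (lba // blocks_per_segment, lba % blocks_per_segment)
                 for lba in range(num_segments * blocks_per_segment)}
    return vsids, vsid_counter, block_map
```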
When writes are received 1412 subsequent to creation 1404 of the new snapshot, the data from the writes is written 1414 to segments allocated to the new snapshot in the manner described above with respect to
Processing of a write may also include updating 1418 the block map 338. In particular, for each LBA referenced by a write operation, the block map 338 may be updated such that the entry for that LBA references the PSID of the segment to which the data for that LBA was written and the physical offset within that segment at which the data was written.
In the event that the original snapshot, i.e. the data written prior to conversion of the storage volume from a non-snapshot volume to a snapshot volume, is deleted 1420, the method 1400 may include deleting 1422, i.e. marking as free in the segment map, those segments allocated to the original snapshot in the segment map that are not referenced in the block map due to all LBAs in them having been overwritten.
Note that for read commands for LBAs that are not written to in the new snapshot, the original snapshot may be evaluated to see if there is data for that LBA. A read may be performed in the same manner as described above with respect to the method 700.
The method 1500 may include receiving 1502 a request to back up a slice of the non-snapshot storage volume, the request indicating a backup target on which the backup copy is to be created, which may be another storage node 106, a cloud storage platform, or some other storage system.
In response, the storage node 106 commences copying 1504 the segments of the slice to the backup target. The method 1500 may include evaluating 1506, 1508 whether a write request referencing the slice is received prior to completion of copying of the slice to the backup target.
If so, the method 1500 may include creating 1510 a copy of a target segment of the write request.
The header 1310 of the original physical segment may also be updated 1512 to reference the copy segment. In some embodiments, the segment map 300 is updated to map the copy segment to the slice and the block map 338 may be updated to change references to the PSID of the original physical segment with the PSID of the copy segment.
The payload data from the write request may then be written 1514 to the copy segment, i.e. at the physical offset corresponding to the LBA referenced by the write request. Note that if other write requests are received prior to completion of copying 1504 to the backup target and that are addressed to the segment offset corresponding to the copy segment and original physical segment, the data from these will be written 1514 to the copy segment and steps 1510 and 1512 may be omitted.
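The copy-on-write handling of steps 1506-1514 may be sketched as follows; segments are modeled as dictionaries from LBA to data, and allocate_segment is a stand-in for allocating a fresh physical segment.

```python
def write_during_backup(seg_offset, lba, data, live, copies, allocate_segment):
    """Handle a write received while the slice is being copied to the
    backup target. `live` maps segment offsets to original physical
    segments; `copies` records segments already redirected."""
    if seg_offset not in copies:
        copy_segment = allocate_segment()        # step 1510: create the copy
        copy_segment.update(live[seg_offset])    # duplicate current contents
        # Step 1512 would also update the original segment's header 1310
        # and the maps to reference the copy segment.
        copies[seg_offset] = copy_segment
    copies[seg_offset][lba] = data               # step 1514: write the payload

# Illustrative use: the first write to segment offset 64 creates a copy,
# leaving the original segment untouched for the in-progress backup.
live = {64: {0: b"old"}}
copies = {}
write_during_backup(64, 0, b"new", live, copies, allocate_segment=dict)
assert copies[64][0] == b"new" and live[64][0] == b"old"
```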
When copying of the slice to the backup target is found 1516 to have completed, the method 1500 may include evaluating 1518 whether any write requests for the slice were received during copying 1504. If not, the method 1500 ends. If so, the method 1500 may include releasing 1520 (marking as free in segment map 300) any original physical segments written to according to steps 1506-1514. If not already performed, the method 1500 may include updating 1522 the segment map such that it references the PSID of the copy segments. For example, an entry in the segment map mapping an original physical segment to a segment offset in the segment map 300 may be updated to replace the PSID of the original physical segment with the PSID of the corresponding copy segment that was a copy of that original physical segment according to step 1510. If not already done, references to the PSID of the original physical segment in the block map 338 may be replaced with references to the PSID of the corresponding copy segment. Accordingly, if there is a VSID in the entry it will now be associated with the PSID of the copy segment.
Note that in the event that the segment map 300 or block map 338 are lost, they may be recreated using the headers 1310 of the original physical segments, which will indicate the PSID of the corresponding copy segment.
Write requests received after the copy to the backup target is complete may be executed 1524 with respect to whichever physical segment corresponds to the LBA (segment offset and physical offset within a segment) corresponding to the address of the write request, which may be a copy segment.
Copy segments modified while the copying of step 1504 is ongoing may also be copied to the backup target and replace the backup copy on the backup target corresponding to the same segment offset as the copy segments, i.e. backup copies of the original physical segment of which the copy segment was a copy.
Subsequent write requests may also be propagated to the backup target which may modify corresponding backup copies accordingly, i.e. write to the backup segment corresponding to the segment offset derived from the address referenced by the write request.
Computing device 1600 includes one or more processor(s) 1602, one or more memory device(s) 1604, one or more interface(s) 1606, one or more mass storage device(s) 1608, one or more Input/output (I/O) device(s) 1610, and a display device 1630 all of which are coupled to a bus 1612. Processor(s) 1602 include one or more processors or controllers that execute instructions stored in memory device(s) 1604 and/or mass storage device(s) 1608. Processor(s) 1602 may also include various types of computer-readable media, such as cache memory.
Memory device(s) 1604 include various computer-readable media, such as volatile memory (e.g., random access memory (RAM) 1614) and/or nonvolatile memory (e.g., read-only memory (ROM) 1616). Memory device(s) 1604 may also include rewritable ROM, such as Flash memory.
Mass storage device(s) 1608 include various computer readable media, such as magnetic tapes, magnetic disks, optical disks, solid-state memory (e.g., Flash memory), and so forth. As shown in
I/O device(s) 1610 include various devices that allow data and/or other information to be input to or retrieved from computing device 1600. Example I/O device(s) 1610 include cursor control devices, keyboards, keypads, microphones, monitors or other display devices, speakers, printers, network interface cards, modems, lenses, CCDs or other image capture devices, and the like.
Display device 1630 includes any type of device capable of displaying information to one or more users of computing device 1600. Examples of display device 1630 include a monitor, display terminal, video projection device, and the like.
Interface(s) 1606 include various interfaces that allow computing device 1600 to interact with other systems, devices, or computing environments. Example interface(s) 1606 include any number of different network interfaces 1620, such as interfaces to local area networks (LANs), wide area networks (WANs), wireless networks, and the Internet. Other interface(s) include user interface 1618 and peripheral device interface 1622. The interface(s) 1606 may also include one or more peripheral interfaces such as interfaces for printers, pointing devices (mice, track pad, etc.), keyboards, and the like.
Bus 1612 allows processor(s) 1602, memory device(s) 1604, interface(s) 1606, mass storage device(s) 1608, I/O device(s) 1610, and display device 1630 to communicate with one another, as well as other devices or components coupled to bus 1612. Bus 1612 represents one or more of several types of bus structures, such as a system bus, PCI bus, IEEE 1394 bus, USB bus, and so forth.
For purposes of illustration, programs and other executable program components are shown herein as discrete blocks, although it is understood that such programs and components may reside at various times in different storage components of computing device 1600, and are executed by processor(s) 1602. Alternatively, the systems and procedures described herein can be implemented in hardware, or a combination of hardware, software, and/or firmware. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein.
In the above disclosure, reference has been made to the accompanying drawings, which form a part hereof, and in which is shown by way of illustration specific implementations in which the disclosure may be practiced. It is understood that other implementations may be utilized and structural changes may be made without departing from the scope of the present disclosure. References in the specification to “one embodiment,” “an embodiment,” “an example embodiment,” etc., indicate that the embodiment described may include a particular feature, structure, or characteristic, but every embodiment may not necessarily include the particular feature, structure, or characteristic. Moreover, such phrases are not necessarily referring to the same embodiment. Further, when a particular feature, structure, or characteristic is described in connection with an embodiment, it is submitted that it is within the knowledge of one skilled in the art to affect such feature, structure, or characteristic in connection with other embodiments whether or not explicitly described.
Implementations of the systems, devices, and methods disclosed herein may comprise or utilize a special purpose or general-purpose computer including computer hardware, such as, for example, one or more processors and system memory, as discussed herein. Implementations within the scope of the present disclosure may also include physical and other computer-readable media for carrying or storing computer-executable instructions and/or data structures. Such computer-readable media can be any available media that can be accessed by a general purpose or special purpose computer system. Computer-readable media that store computer-executable instructions are computer storage media (devices). Computer-readable media that carry computer-executable instructions are transmission media. Thus, by way of example, and not limitation, implementations of the disclosure can comprise at least two distinctly different kinds of computer-readable media: computer storage media (devices) and transmission media.
Computer storage media (devices) includes RAM, ROM, EEPROM, CD-ROM, solid state drives (“SSDs”) (e.g., based on RAM), Flash memory, phase-change memory (“PCM”), other types of memory, other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer.
An implementation of the devices, systems, and methods disclosed herein may communicate over a computer network. A "network" is defined as one or more data links that enable the transport of electronic data between computer systems and/or modules and/or other electronic devices. When information is transferred or provided over a network or another communications connection (either hardwired, wireless, or a combination of hardwired or wireless) to a computer, the computer properly views the connection as a transmission medium. Transmission media can include a network and/or data links, which can be used to carry desired program code means in the form of computer-executable instructions or data structures and which can be accessed by a general purpose or special purpose computer. Combinations of the above should also be included within the scope of computer-readable media.
Computer-executable instructions comprise, for example, instructions and data which, when executed at a processor, cause a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, or even source code. Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the features or acts described above. Rather, the described features and acts are disclosed as example forms of implementing the claims.
Those skilled in the art will appreciate that the disclosure may be practiced in network computing environments with many types of computer system configurations, including, an in-dash vehicle computer, personal computers, desktop computers, laptop computers, message processors, hand-held devices, multi-processor systems, microprocessor-based or programmable consumer electronics, network PCs, minicomputers, mainframe computers, mobile telephones, PDAs, tablets, pagers, routers, switches, various storage devices, and the like. The disclosure may also be practiced in distributed system environments where local and remote computer systems, which are linked (either by hardwired data links, wireless data links, or by a combination of hardwired and wireless data links) through a network, both perform tasks. In a distributed system environment, program modules may be located in both local and remote memory storage devices.
Further, where appropriate, functions described herein can be performed in one or more of: hardware, software, firmware, digital components, or analog components. For example, one or more application specific integrated circuits (ASICs) can be programmed to carry out one or more of the systems and procedures described herein. Certain terms are used throughout the description and claims to refer to particular system components. As one skilled in the art will appreciate, components may be referred to by different names. This document does not intend to distinguish between components that differ in name, but not function.
It should be noted that the sensor embodiments discussed above may comprise computer hardware, software, firmware, or any combination thereof to perform at least a portion of their functions. For example, a sensor may include computer code configured to be executed in one or more processors, and may include hardware logic/electrical circuitry controlled by the computer code. These example devices are provided herein for purposes of illustration and are not intended to be limiting. Embodiments of the present disclosure may be implemented in further types of devices, as would be known to persons skilled in the relevant art(s).
At least some embodiments of the disclosure have been directed to computer program products comprising such logic (e.g., in the form of software) stored on any computer useable medium. Such software, when executed in one or more data processing devices, causes a device to operate as described herein.
While various embodiments of the present disclosure have been described above, it should be understood that they have been presented by way of example only, and not limitation. It will be apparent to persons skilled in the relevant art that various changes in form and detail can be made therein without departing from the spirit and scope of the disclosure. Thus, the breadth and scope of the present disclosure should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents. The foregoing description has been presented for the purposes of illustration and description. It is not intended to be exhaustive or to limit the disclosure to the precise form disclosed. Many modifications and variations are possible in light of the above teaching. Further, it should be noted that any or all of the aforementioned alternate implementations may be used in any combination desired to form additional hybrid implementations of the disclosure.
U.S. Patent Documents

Number | Name | Date | Kind |
---|---|---|---|
3715573 | Vogelsberg | Feb 1973 | A |
4310883 | Clifton | Jan 1982 | A |
5602993 | Stromberg | Feb 1997 | A |
5680513 | Hyland | Oct 1997 | A |
5796290 | Takahashi | Aug 1998 | A |
6014669 | Slaughter | Jan 2000 | A |
6052797 | Ofek | Apr 2000 | A |
6119214 | Dirks | Sep 2000 | A |
6157963 | Courtright, II | Dec 2000 | A |
6161191 | Slaughter | Dec 2000 | A |
6298478 | Nally | Oct 2001 | B1 |
6301707 | Carroll | Oct 2001 | B1 |
6311193 | Sekido | Oct 2001 | B1 |
6851034 | Challenger | Feb 2005 | B2 |
6886160 | Lee | Apr 2005 | B1 |
6895485 | Dekoning | May 2005 | B1 |
6957221 | Hart | Oct 2005 | B1 |
7096465 | Dardinski | Aug 2006 | B1 |
7111055 | Falkner | Sep 2006 | B2 |
7171659 | Becker | Jan 2007 | B2 |
7246351 | Bloch | Jul 2007 | B2 |
7305671 | Davidov | Dec 2007 | B2 |
7461374 | Balint | Dec 2008 | B1 |
7467268 | Lindemann | Dec 2008 | B2 |
7535854 | Luo | May 2009 | B2 |
7590620 | Pike | Sep 2009 | B1 |
7698698 | Skan | Apr 2010 | B2 |
7721283 | Kovachka | May 2010 | B2 |
7734859 | Daniel | Jun 2010 | B2 |
7738457 | Nordmark | Jun 2010 | B2 |
7779091 | Wilkinson | Aug 2010 | B2 |
7797693 | Gustafson | Sep 2010 | B1 |
7984485 | Rao | Jul 2011 | B1 |
8037471 | Keller | Oct 2011 | B2 |
8046450 | Schloss | Oct 2011 | B1 |
8060522 | Birdwell | Nov 2011 | B2 |
8121874 | Guheen | Feb 2012 | B1 |
8171141 | Offer | May 2012 | B1 |
8219821 | Zimmels | Jul 2012 | B2 |
8250033 | De Souter | Aug 2012 | B1 |
8261295 | Risbood | Sep 2012 | B1 |
8326883 | Pizzorni | Dec 2012 | B2 |
8392498 | Berg | Mar 2013 | B2 |
8429346 | Chen | Apr 2013 | B1 |
8464241 | Hayton | Jun 2013 | B2 |
8505003 | Bowen | Aug 2013 | B2 |
8527544 | Colgrove | Sep 2013 | B1 |
8589447 | Grunwald et al. | Nov 2013 | B1 |
8601467 | Hofhansl | Dec 2013 | B2 |
8620973 | Veeraswamy | Dec 2013 | B1 |
8666933 | Pizzorni | Mar 2014 | B2 |
8745003 | Patterson | Jun 2014 | B1 |
8775751 | Pendharkar | Jul 2014 | B1 |
8782632 | Chigurapati | Jul 2014 | B1 |
8788634 | Krig | Jul 2014 | B2 |
8832324 | Hodges | Sep 2014 | B1 |
8886806 | Tung | Nov 2014 | B2 |
8909885 | Corbett | Dec 2014 | B2 |
8954383 | Vempati | Feb 2015 | B1 |
8954568 | Krishnan | Feb 2015 | B2 |
8966198 | Harris | Feb 2015 | B1 |
9009542 | Marr | Apr 2015 | B1 |
9134992 | Wong | Sep 2015 | B2 |
9146769 | Shankar | Sep 2015 | B1 |
9148465 | Gambardella | Sep 2015 | B2 |
9152337 | Kono | Oct 2015 | B2 |
9167028 | Bansal | Oct 2015 | B1 |
9280591 | Kharatishvili | Mar 2016 | B1 |
9330155 | Bono | May 2016 | B1 |
9336060 | Nori | May 2016 | B2 |
9342444 | Minckler | May 2016 | B2 |
9367301 | Serrano | Jun 2016 | B1 |
9390128 | Seetala | Jul 2016 | B1 |
9436693 | Lockhart | Sep 2016 | B1 |
9514160 | Song | Dec 2016 | B2 |
9521198 | Agarwala | Dec 2016 | B1 |
9569274 | Tarta | Feb 2017 | B2 |
9569480 | Provencher | Feb 2017 | B2 |
9590872 | Jagtap | Mar 2017 | B1 |
9600193 | Ahrens | Mar 2017 | B2 |
9613119 | Aron | Apr 2017 | B1 |
9619389 | Roug | Apr 2017 | B1 |
9635132 | Lin | Apr 2017 | B1 |
9667470 | Prathipati | May 2017 | B2 |
9733992 | Poeluev | Aug 2017 | B1 |
9747096 | Searle | Aug 2017 | B2 |
9870366 | Duan | Jan 2018 | B1 |
9880933 | Gupta | Jan 2018 | B1 |
9892265 | Tripathy | Feb 2018 | B1 |
9929916 | Subramanian | Mar 2018 | B1 |
9998955 | MacCarthaigh | Jun 2018 | B1 |
10019459 | Agarwala | Jul 2018 | B1 |
10042628 | Thompson | Aug 2018 | B2 |
10061520 | Zhao | Aug 2018 | B1 |
10133619 | Nagpal | Nov 2018 | B1 |
10169169 | Shaikh | Jan 2019 | B1 |
10191778 | Yang | Jan 2019 | B1 |
10241774 | Spivak | Mar 2019 | B2 |
10282229 | Wagner | May 2019 | B2 |
10339112 | Ranade | Jul 2019 | B1 |
10353634 | Greenwood | Jul 2019 | B1 |
10430434 | Sun | Oct 2019 | B2 |
10657119 | Acheson | May 2020 | B1 |
10956246 | Bagde | Mar 2021 | B1 |
20020141390 | Fangman | Oct 2002 | A1 |
20040010716 | Childress | Jan 2004 | A1 |
20040153703 | Vigue | Aug 2004 | A1 |
20040221125 | Ananthanarayanan | Nov 2004 | A1 |
20050065986 | Bixby | Mar 2005 | A1 |
20050216895 | Tran | Sep 2005 | A1 |
20050256948 | Hu | Nov 2005 | A1 |
20060025908 | Rachlin | Feb 2006 | A1 |
20060053357 | Rajski | Mar 2006 | A1 |
20060085674 | Ananthamurthy | Apr 2006 | A1 |
20060259686 | Sonobe | Nov 2006 | A1 |
20070006015 | Rao | Jan 2007 | A1 |
20070016786 | Waltermann | Jan 2007 | A1 |
20070067583 | Zohar | Mar 2007 | A1 |
20070165625 | Eisner | Jul 2007 | A1 |
20070260842 | Faibish | Nov 2007 | A1 |
20070277056 | Varadarajan | Nov 2007 | A1 |
20070288791 | Allen | Dec 2007 | A1 |
20080010421 | Chen | Jan 2008 | A1 |
20080068899 | Ogihara | Mar 2008 | A1 |
20080189468 | Schmidt | Aug 2008 | A1 |
20080235544 | Lai | Sep 2008 | A1 |
20080256141 | Wayda | Oct 2008 | A1 |
20080256143 | Reddy | Oct 2008 | A1 |
20080256167 | Branson | Oct 2008 | A1 |
20080263400 | Waters | Oct 2008 | A1 |
20080270592 | Choudhary | Oct 2008 | A1 |
20090144497 | Withers | Jun 2009 | A1 |
20090172335 | Kulkarni | Jul 2009 | A1 |
20090240809 | La Frese | Sep 2009 | A1 |
20090254701 | Kurokawa | Oct 2009 | A1 |
20090307249 | Koifman | Dec 2009 | A1 |
20100100251 | Chao | Apr 2010 | A1 |
20100161941 | Vyshetsky | Jun 2010 | A1 |
20100162233 | Ku | Jun 2010 | A1 |
20100211815 | Mankovskii | Aug 2010 | A1 |
20100274984 | Inomata | Oct 2010 | A1 |
20100299309 | Maki | Nov 2010 | A1 |
20100306495 | Kumano | Dec 2010 | A1 |
20100332730 | Royer | Dec 2010 | A1 |
20110083126 | Bhakta | Apr 2011 | A1 |
20110119664 | Kimura | May 2011 | A1 |
20110161291 | Taleck | Jun 2011 | A1 |
20110188506 | Arribas | Aug 2011 | A1 |
20110208928 | Chandra | Aug 2011 | A1 |
20110239227 | Schaefer | Sep 2011 | A1 |
20110246420 | Wang | Oct 2011 | A1 |
20110276951 | Jain | Nov 2011 | A1 |
20120005557 | Mardiks | Jan 2012 | A1 |
20120016845 | Bates | Jan 2012 | A1 |
20120066449 | Colgrove | Mar 2012 | A1 |
20120102369 | Hiltunen | Apr 2012 | A1 |
20120137059 | Yang | May 2012 | A1 |
20120216052 | Dunn | Aug 2012 | A1 |
20120226667 | Volvovski | Sep 2012 | A1 |
20120240012 | Weathers | Sep 2012 | A1 |
20120259819 | Patwardhan | Oct 2012 | A1 |
20120265976 | Spiers | Oct 2012 | A1 |
20120303348 | Lu | Nov 2012 | A1 |
20120311671 | Wood | Dec 2012 | A1 |
20120331113 | Jain | Dec 2012 | A1 |
20130054552 | Hawkins | Feb 2013 | A1 |
20130054932 | Acharya | Feb 2013 | A1 |
20130080723 | Sawa | Mar 2013 | A1 |
20130254521 | Bealkowski | Sep 2013 | A1 |
20130282662 | Kumarasamy | Oct 2013 | A1 |
20130332688 | Corbett | Dec 2013 | A1 |
20130339659 | Bybell | Dec 2013 | A1 |
20130346618 | Holkkola | Dec 2013 | A1 |
20130346709 | Wang | Dec 2013 | A1 |
20140006465 | Davis | Jan 2014 | A1 |
20140047263 | Coatney | Feb 2014 | A1 |
20140047341 | Breternitz | Feb 2014 | A1 |
20140047342 | Breternitz | Feb 2014 | A1 |
20140058871 | Marr | Feb 2014 | A1 |
20140059527 | Gagliardi | Feb 2014 | A1 |
20140059528 | Gagliardi | Feb 2014 | A1 |
20140089265 | Talagala | Mar 2014 | A1 |
20140108483 | Tarta | Apr 2014 | A1 |
20140130040 | Lemanski | May 2014 | A1 |
20140149696 | Frenkel | May 2014 | A1 |
20140195847 | Webman | Jul 2014 | A1 |
20140245319 | Fellows | Aug 2014 | A1 |
20140281449 | Christopher | Sep 2014 | A1 |
20140282596 | Bourbonnais | Sep 2014 | A1 |
20150046644 | Karp | Feb 2015 | A1 |
20150067031 | Acharya | Mar 2015 | A1 |
20150074358 | Flinsbaugh | Mar 2015 | A1 |
20150112951 | Narayanamurthy et al. | Apr 2015 | A1 |
20150134857 | Hahn | May 2015 | A1 |
20150149605 | de la Iglesia | May 2015 | A1 |
20150186217 | Eslami Sarab | Jul 2015 | A1 |
20150278333 | Hirose | Oct 2015 | A1 |
20150317212 | Lee | Nov 2015 | A1 |
20150319160 | Ferguson | Nov 2015 | A1 |
20150326481 | Rector | Nov 2015 | A1 |
20150379287 | Mathur | Dec 2015 | A1 |
20160011816 | Aizman | Jan 2016 | A1 |
20160026667 | Mukherjee | Jan 2016 | A1 |
20160042005 | Liu | Feb 2016 | A1 |
20160124775 | Ashtiani | May 2016 | A1 |
20160197995 | Lu | Jul 2016 | A1 |
20160239412 | Wada | Aug 2016 | A1 |
20160259597 | Worley | Sep 2016 | A1 |
20160283261 | Nakatsu | Sep 2016 | A1 |
20160357456 | Iwasaki | Dec 2016 | A1 |
20160357548 | Stanton | Dec 2016 | A1 |
20160373327 | Degioanni | Dec 2016 | A1 |
20170034023 | Nickolov | Feb 2017 | A1 |
20170060710 | Ramani | Mar 2017 | A1 |
20170060975 | Akyureklier | Mar 2017 | A1 |
20170139645 | Byun | May 2017 | A1 |
20170149843 | Amulothu | May 2017 | A1 |
20170168903 | Dornemann | Jun 2017 | A1 |
20170192889 | Sato | Jul 2017 | A1 |
20170214550 | Kumar | Jul 2017 | A1 |
20170235649 | Shah | Aug 2017 | A1 |
20170242617 | Walsh | Aug 2017 | A1 |
20170242719 | Tsirkin | Aug 2017 | A1 |
20170244557 | Riel | Aug 2017 | A1 |
20170244787 | Rangasamy | Aug 2017 | A1 |
20170293450 | Battaje | Oct 2017 | A1 |
20170322954 | Horowitz | Nov 2017 | A1 |
20170337492 | Chen | Nov 2017 | A1 |
20170371551 | Sachdev | Dec 2017 | A1 |
20180006896 | MacNamara | Jan 2018 | A1 |
20180024889 | Verma | Jan 2018 | A1 |
20180046553 | Okamoto | Feb 2018 | A1 |
20180082053 | Brown | Mar 2018 | A1 |
20180107419 | Sachdev | Apr 2018 | A1 |
20180113625 | Sancheti | Apr 2018 | A1 |
20180113770 | Hasanov | Apr 2018 | A1 |
20180136931 | Hendrich | May 2018 | A1 |
20180137306 | Brady | May 2018 | A1 |
20180150306 | Govindaraju | May 2018 | A1 |
20180159745 | Byers | Jun 2018 | A1 |
20180165170 | Hegdal | Jun 2018 | A1 |
20180218000 | Setty | Aug 2018 | A1 |
20180225140 | Titus | Aug 2018 | A1 |
20180225216 | Filippo | Aug 2018 | A1 |
20180246670 | Baptist | Aug 2018 | A1 |
20180246745 | Aronovich | Aug 2018 | A1 |
20180247064 | Aronovich | Aug 2018 | A1 |
20180276215 | Chiba | Sep 2018 | A1 |
20180285164 | Hu | Oct 2018 | A1 |
20180285223 | McBride | Oct 2018 | A1 |
20180285353 | Ramohalli | Oct 2018 | A1 |
20180287883 | Joshi | Oct 2018 | A1 |
20180302335 | Gao | Oct 2018 | A1 |
20180329981 | Gupte | Nov 2018 | A1 |
20180364917 | Ki | Dec 2018 | A1 |
20180375728 | Gangil | Dec 2018 | A1 |
20190004704 | Rathi | Jan 2019 | A1 |
20190065061 | Kim | Feb 2019 | A1 |
20190065323 | Dhamdhere | Feb 2019 | A1 |
20190073132 | Zhou | Mar 2019 | A1 |
20190073372 | Venkatesan | Mar 2019 | A1 |
20190079928 | Kumar | Mar 2019 | A1 |
20190089651 | Pignataro et al. | Mar 2019 | A1 |
20190102226 | Caldato | Apr 2019 | A1 |
20190109756 | Abulebdeh | Apr 2019 | A1 |
20190116690 | Chen | Apr 2019 | A1 |
20190148932 | Benesch | May 2019 | A1 |
20190156023 | Gerebe | May 2019 | A1 |
20190163460 | Kludy | May 2019 | A1 |
20190188094 | Ramamoorthi | Jun 2019 | A1 |
20190190803 | Joshi | Jun 2019 | A1 |
20190199601 | Lynar | Jun 2019 | A1 |
20190213085 | Alluboyina | Jul 2019 | A1 |
20190215313 | Doshi | Jul 2019 | A1 |
20190220266 | Doshi | Jul 2019 | A1 |
20190220315 | Vallala | Jul 2019 | A1 |
20190235895 | Ovesea | Aug 2019 | A1 |
20190250849 | Compton | Aug 2019 | A1 |
20190272205 | Jiang | Sep 2019 | A1 |
20190278624 | Bade | Sep 2019 | A1 |
20190324666 | Kusters | Oct 2019 | A1 |
20190334727 | Kaufman | Oct 2019 | A1 |
20190361748 | Walters | Nov 2019 | A1 |
20190369273 | Liu | Dec 2019 | A1 |
20190370018 | Kirkpatrick | Dec 2019 | A1 |
20200019414 | Byard | Jan 2020 | A1 |
20200026635 | Gaber | Jan 2020 | A1 |
20200034193 | Jayaram | Jan 2020 | A1 |
20200034254 | Natanzon | Jan 2020 | A1 |
20200065406 | Ippatapu | Feb 2020 | A1 |
20200073586 | Kurata | Mar 2020 | A1 |
20200083909 | Kusters | Mar 2020 | A1 |
20200150977 | Wang | May 2020 | A1 |
20200162330 | Vadapalli | May 2020 | A1 |
20200257519 | Shen | Aug 2020 | A1 |
20200310774 | Zhu | Oct 2020 | A1 |
20200310915 | Alluboyina | Oct 2020 | A1 |
20200356537 | Sun | Nov 2020 | A1 |
20200412625 | Bagarolo | Dec 2020 | A1 |
20210029000 | Mordani | Jan 2021 | A1 |
20210042151 | Muller | Feb 2021 | A1 |
20210064536 | Palmer | Mar 2021 | A1 |
20210067607 | Gardner | Mar 2021 | A1 |
20210126839 | Rudrachar | Apr 2021 | A1 |
20210141655 | Gamage | May 2021 | A1 |
20210157622 | Ananthapur | May 2021 | A1 |
20210168034 | Qian | Jun 2021 | A1 |
Foreign Patent Documents

Number | Date | Country |
---|---|---|
WO2017008675 | Jan 2017 | WO |
Other Publications

Segment map, Google, Feb. 4, 2019.
Zheng, “Fast and Secure Append-Only Storage with Infinite Capacity”, Aug. 27, 2003.
User Mode and Kernel Mode, Microsoft, Apr. 19, 2017.
Xu, “Precise Memory Leak Detection for Java Software Using Container Profiling”, Jul. 2013.
Mogi et al., “Dynamic Parity Stripe Reorganizations for RAID5 Disk Arrays”, IEEE, pp. 17-26 (Year: 1994).
Syed et al., “The Container Manager Pattern”, ACM, pp. 1-9 (Year: 2017).
Rehmann et al., “Performance of Containerized Database Management Systems”, ACM, pp. 1-6 (Year: 2018).
Awada et al., “Improving Resource Efficiency of Container-instance Clusters on Clouds”, IEEE, pp. 929-934 (Year: 2017).
Stankovski et al., “Implementing Time-Critical Functionalities with a Distributed Adaptive Container Architecture”, ACM, pp. 1-5 (Year: 2016).
Dhakate et al., “Distributed Cloud Monitoring Using Docker as Next Generation Container Virtualization Technology”, IEEE, pp. 1-5 (Year: 2015).
Crameri et al., “Staged Deployment in Mirage, an Integrated Software Upgrade Testing and Distribution System”, ACM, pp. 221-236 (Year: 2007).
Cosmo et al., “Package Upgrades in FOSS Distributions: Details and Challenges”, ACM (Year: 2008).
Burg et al., “Atomic Upgrading of Distributed Systems”, ACM, pp. 1-5 (Year: 2008).
Souer et al., “Component Based Architecture for Web Content Management: Runtime Deployable Web Manager Component Bundles”, IEEE, pp. 366-369 (Year: 2008).
Weingartner et al., “A Distributed Autonomic Management Framework for Cloud Computing Orchestration”, 2016 IEEE World Congress on Services (Year: 2016).
Publication Number | Date | Country |
---|---|---|
20210073079 A1 | Mar 2021 | US |