The Non-Volatile Memory express (NVMe) specification defines an interface for accessing solid-state drives (SSDs) and other target devices attached through a Peripheral Component Interconnect Express (PCIe) bus. The NVMe SSD PCIe host interface defines a concept of Namespaces, which are analogous to logical volumes supported by SAS RAID (Redundant Array of Independent Disks) adapters. Copy on write functionality in an SSD may be implemented using namespaces. Namespaces are typically implemented as an abstraction above the global Logical Block Address (LBA) space tracked in an SSD's indirection system.
LBA metadata indicates only one host LBA and does not include a reference count. Including or appending a reference count in the metadata would incur additional writes to rewrite the data with new metadata, which is a poor solution. Without such reference counts in the LBA metadata, there is no mechanism for determining whether additional clone copies exist (e.g., that additional LBAs point to the same data). Managing multiple clone copies of data on an SSD therefore faces particular challenges with respect to garbage collection. For example, garbage collection challenges may arise when a host modifies the ‘source’ LBA after a copy operation. The source copy may be effectively the ‘master’ copy that was written before the copy operation was performed to duplicate the data to one or more additional host LBAs. When this master LBA is modified, a non-copy-aware garbage collection algorithm may free the physical data at the next opportunity, since no method exists to efficiently modify that data's metadata to indicate that other host LBAs still point to that data.
Techniques for improved copy on write functionality within an SSD are disclosed. In some embodiments, the techniques may be realized as a method for providing improved copy on write functionality within an SSD including providing, in memory of a PCIe device, an indirection data structure. The data structure may include a master entry for an original or source copy of the cloned data, the master entry having a reference to a master index and a reference to a next index, and a clone entry for the cloned data, the clone entry having a reference to the master index and a reference to a next index. The techniques may include traversing, using a computer processor, one or more copies of the cloned data using one or more of the references.
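By way of illustration only, the following is a minimal sketch of such a structure in C, assuming a flat table; the type and field names (clone_links_t, master_index, next_index, traverse_copies) are hypothetical rather than taken from any claimed embodiment:

```c
#include <stdint.h>

/* Hypothetical entry layout: every copy of the cloned data carries a
 * reference to the master index and to a next index, forming a circular
 * list of all copies. Field names and widths are illustrative. */
typedef struct {
    uint32_t master_index;  /* index of the master entry (self if master)  */
    uint32_t next_index;    /* next copy; the last copy points back to the
                               master entry, closing the circle            */
} clone_links_t;

/* Traverse every copy of the cloned data by following next references
 * until the walk returns to the entry where it started. */
static void traverse_copies(const clone_links_t *table, uint32_t start,
                            void (*visit)(uint32_t index)) {
    uint32_t idx = start;
    do {
        visit(idx);
        idx = table[idx].next_index;
    } while (idx != start);
}
```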
In accordance with additional aspects of this exemplary embodiment, the host device may include at least one of: an enterprise server, a database server, a workstation, and a computer.
In accordance with additional aspects of this exemplary embodiment, the indirection data structure may include a plurality of physical addresses.
In accordance with further aspects of this exemplary embodiment, the indirection data structure may be part of a circularly linked list, wherein the master entry for cloned data comprises a reference to a master index and a reference to a next index.
In accordance with other aspects of this exemplary embodiment, the indirection data structure may be part of a circularly linked list, wherein the clone entry for the cloned data comprises a reference to the master index and a reference to a next index.
In accordance with additional aspects of this exemplary embodiment, the indirection data structure may be part of a single-ended linked list, wherein an entry in an index provides an indication that the index is a master index.
In accordance with further aspects of this exemplary embodiment, the references may include entries in a flat indirection table for logical block addressing.
In accordance with other aspects of this exemplary embodiment, the references may include entries in a tree data structure for logical block addressing.
In accordance with additional aspects of this exemplary embodiment, the improved copy on write functionality may include an improved namespace copy functionality.
In accordance with further aspects of this exemplary embodiment, the techniques may include setting an indicator for one or more packed logical blocks to indicate that the one or more packed logical blocks are cloned.
In accordance with other aspects of this exemplary embodiment, a master index of the master entry may point to the master entry.
In accordance with additional aspects of this exemplary embodiment, the master index of the cloned entry may point to the master entry.
In accordance with further aspects of this exemplary embodiment, the next index of a last cloned entry in a data structure may point to the master entry.
In accordance with other aspects of this exemplary embodiment, the techniques may include determining that the clone entry for the cloned data is an only clone entry, wherein the determination comprises determining that the next index of the clone entry matches the master index of the clone entry, determining that the next index of the master entry points to the clone entry, uncloning the clone entry of the cloned data by setting the next index of the clone entry to an indirection entry indicating a packed logical block and setting the master index entry to an indirection entry indicating a packed logical block, and uncloning the master entry of the cloned data by setting the next index of the master entry to a first indirection entry indicating a first packed logical block of an original master entry and setting the master index of the master entry to a second indirection entry indicating a second packed logical block of the original master entry.
In accordance with additional aspects of this exemplary embodiment, the techniques may include determining that the clone entry for the cloned data is one of a plurality of clone entries, wherein the determination comprises determining at least one of: that the next index of the cloned entry does not match the master index of the cloned entry, and that the next index of the master entry does not point to the clone entry, and uncloning the clone entry of the cloned data by setting the next index of a prior entry to point to an entry indicated by the next index of the clone entry.
In accordance with further aspects of this exemplary embodiment, the techniques may include reviewing an entry during a garbage collection process, determining that the entry contains a cloned indicator, and determining that the entry in the garbage collection process is a valid entry not to be deleted based upon the determination that the entry contains the cloned indicator.
In other embodiments, the techniques may be realized as a computer program product comprising a series of instructions executable on a computer. The computer program product may perform a process for providing improved copy on write functionality within an SSD. The computer program may implement the steps of providing, in memory of a device, an indirection data structure comprising a master entry for cloned data, the master entry having a reference to one or more indexes, and a clone entry for the cloned data, the clone entry having at least one of: a reference to a master index, a reference to a next index, and a value indicating an end of a data structure, and traversing, using a computer processor, one or more copies of the cloned data using one or more of the references.
In yet other embodiments, the techniques may be realized as a system for providing improved copy on write functionality within an SSD. The system may include a first device, wherein the first device includes instructions stored in memory. The instructions may include an instruction to provide, in memory of the first device, an indirection data structure comprising a master entry for cloned data, the master entry having a reference to one or more indexes, and a clone entry for the cloned data, the clone entry having at least one of: a reference to a master index, a reference to a next index, and a value indicating an end of a data structure, and an instruction to traverse, using a computer processor, one or more copies of the cloned data using one or more of the references.
In accordance with additional aspects of this exemplary embodiment, the indirection data structure may include a plurality of physical addresses.
In accordance with further aspects of this exemplary embodiment, the indirection data structure may be part of a circularly linked list, wherein the master entry for cloned data comprises a reference to a master index and a reference to a next index.
In accordance with other aspects of this exemplary embodiment, the indirection data structure may be part of a circularly linked list, wherein the clone entry for the cloned data comprises a reference to the master index and a reference to a next index.
In accordance with additional aspects of this exemplary embodiment, the indirection data structure may be part of a single-ended linked list, wherein an entry in an index provides an indication that the index is a master index.
In accordance with further aspects of this exemplary embodiment, the references may include entries in a flat indirection table for logical block addressing.
In accordance with other aspects of this exemplary embodiment, the first device may include a Peripheral Component Interconnect Express (PCIe) device.
In accordance with additional aspects of this exemplary embodiment, the techniques may further include an instruction to set an indicator for one or more packed logical blocks to indicate that the one or more packed logical blocks are cloned.
In accordance with further aspects of this exemplary embodiment, the master index of the master entry and the master index of the cloned entry may point to the master entry and the next index of a last cloned entry in a data structure points to the master entry.
In accordance with additional aspects of this exemplary embodiment, the target device (e.g., a PCIe device) may include at least one of: a graphics processing unit, an audio/video capture card, a hard disk, a host bus adapter, and a Non-Volatile Memory express (NVMe) controller. According to some embodiments, the target device may be an NVMe compliant device.
The present disclosure will now be described in more detail with reference to exemplary embodiments thereof as shown in the accompanying drawings. While the present disclosure is described below with reference to exemplary embodiments, it should be understood that the present disclosure is not limited thereto. Those of ordinary skill in the art having access to the teachings herein will recognize additional implementations, modifications, and embodiments, as well as other fields of use, which are within the scope of the present disclosure as described herein, and with respect to which the present disclosure may be of significant utility.
In order to facilitate a fuller understanding of the present disclosure, reference is now made to the accompanying drawings, in which like elements are referenced with like numerals. These drawings should not be construed as limiting the present disclosure, but are intended to be exemplary only.
The present disclosure relates to improved copy on write functionality. In some embodiments, this copy on write functionality may include namespace copies. The NVMe SSD PCIe host interface defines a concept of Namespaces, which are analogous to logical volumes supported by SAS RAID (Redundant Array of Independent Disks) adapters. A namespace may be dedicated to a Virtual Machine (VM). Within an SSD, Namespaces can be logically isolated from one another and can be securely erased and repurposed without affecting other Namespaces.
A namespace identifier may be included in a media access command issued by the host, along with the LBA within that namespace. The SSD may use a data structure (e.g., a table lookup, a tree, a hashmap, a bitmap, etc.) to translate that combination of namespace and LBA into a global LBA used internally to the SSD. According to some embodiments, references to an LBA may refer to this global LBA.
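A minimal sketch of such a translation, assuming the simplest case of a flat base-offset table in which each namespace owns a contiguous range of global LBAs; the names (ns_base, to_global_lba) and the layout are hypothetical, and the disclosure equally permits trees, hashmaps, or bitmaps:

```c
#include <stdint.h>

#define MAX_NAMESPACES 32u

/* Hypothetical: first global LBA owned by each namespace. */
static uint64_t ns_base[MAX_NAMESPACES];

/* Translate (namespace id, namespace-relative LBA) into the global LBA
 * used internally by the SSD's indirection system. */
static uint64_t to_global_lba(uint32_t nsid, uint64_t ns_lba) {
    return ns_base[nsid] + ns_lba;
}
```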
Embodiments of the present disclosure describe a system and method for implementing an efficient ‘Namespace Copy’ function that avoids duplicating the data on the SSD. This reduces the write amplification incurred within the SSD, which extends the life of the SSD while providing higher performance.
Namespace copies are a form of ‘copy on write’ functionality. When the copy is performed, a pointer is generated that points to the single copy on the media; a new copy on the media is generated and updated only when a write occurs. A Namespace copy function therefore necessitates an efficient implementation of ‘copy on write’ on an SSD. Embodiments of the present disclosure can be applied to namespace copies. Embodiments of the present disclosure can also be applied to other ‘copy on write’ implementations for an SSD. For example, a “snapshot” copy may be used to create point-in-time images of a namespace, and implementations of the present embodiment may be used to track snapshot copies.
Embodiments of the present disclosure extend an SSD indirection system (e.g., a flat LBA table) or method to include multiple entries that point to the same physical location. Such an implementation may enable efficient garbage collection when multiple references exist. Tracking multiple references or handling multiple pointers (e.g., to NAND flash data) may improve garbage collection. Garbage collection may be performed using metadata on the non-volatile storage (e.g., NAND Flash memory, NOR Flash memory, etc.) that includes the host LBA for the data. The garbage collection algorithm may determine which host sectors are still valid by looking those LBAs up in the indirection data structure (e.g., a table, a tree, a hashmap, a bitmap, etc.) to see whether the data structure still points to the physical location. If not, the algorithm frees the block.
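The validity test may be sketched as follows, with hypothetical names (gc_entry_t, gc_is_valid); the clone_tracking check anticipates the Clone Tracking bit described later in this disclosure:

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint64_t nand_addr;       /* physical location currently mapped       */
    bool     clone_tracking;  /* set when the entry is part of a clone    */
} gc_entry_t;

/* Given data found during garbage collection whose on-media metadata
 * recorded host LBA 'lba', decide whether the data at 'phys' is still
 * valid and must be relocated rather than freed. */
static bool gc_is_valid(const gc_entry_t *table, uint64_t lba, uint64_t phys) {
    if (table[lba].clone_tracking)
        return true;   /* other host LBAs may still point at this data    */
    return table[lba].nand_addr == phys;  /* stale if no longer mapped    */
}
```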
One or more embodiments described herein provide efficient representation of a duplicated indirection entry using a single flag and an alternate indirection entry format that tracks one or more duplicated host LBAs. One or more embodiments may use a flat indirection lookup data structure for tracking multiple logical block addresses pointing to a same physical address. Other embodiments may be implemented using a hashmap, a tree, or a composition-based system for tracking duplicated LBAs.
Techniques for improved copy on write functionality within an SSD are discussed in further detail below.
Turning now to the drawings,
Target 110 may contain NVMe controller 112 and non-volatile storage 114. Target 116 may contain NVMe controller 118 and non-volatile storage 120. Target 122 may contain NVMe controller 124 and non-volatile storage 126.
System memory 128 may contain memory based resources accessible to Host System 102 via a memory interface (e.g., double data rate type three synchronous dynamic random access memory (DDR3 SDRAM)). System memory 128 can take any suitable form, such as, but not limited to, a solid-state memory (e.g., flash memory, or solid state device (SSD)), optical memory, and magnetic memory. System memory 128 can be volatile or non-volatile memory. System memory 128 may contain one or more data structures.
According to some embodiments, interface standards other than PCIe may be used for one or more portions including, but not limited to, Serial Advanced Technology Attachment (SATA), Advanced Technology Attachment (ATA), Small Computer System Interface (SCSI), PCI-extended (PCI-X), Fibre Channel, Serial Attached SCSI (SAS), Secure Digital (SD), Embedded Multi-Media Card (EMMC), and Universal Flash Storage (UFS).
The host system 102 can take any suitable form, such as, but not limited to, an enterprise server, a database host, a workstation, a personal computer, a mobile phone, a game device, a personal digital assistant (PDA), an email/text messaging device, a digital camera, a digital media (e.g., MP3) player, a GPS navigation device, and a TV system.
The host system 102 and the target device can include additional components, which are not shown in
Referring to
The physical addresses of the individual host LBAs are distributed such that an additional lookup is required for some LBAs. This may make room for including the master and clone pointers in each master and clone entry. The number of DRAM accesses to fetch the physical addresses is not increased significantly, however. As illustrated in
In embodiments with a flat indirection data structure, a PLB (physical location block) refers to an individual entry containing a physical NAND memory address in a single lookup table. A typical PLB granularity dedicates 4 KB (e.g., 8*512 B sectors) to a single data structure entry. As an example, consider a clone chunk size of 8 PLBs: the average number of DRAM accesses required for purely random single sector accesses to a cloned range is 1.25. This number can be lower with a larger clone chunk size, with a trade-off being less granular clone boundaries and more NAND accesses required to ‘unclone’ a chunk of PLBs.
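One way to arrive at the 1.25 figure, as a sketch: assuming each C-Entry surrenders two of its eight PLB slots to the Master Index and Next Index (leaving six address slots per entry), a purely random single sector access finds its physical address in the first C-Entry read with probability 6/8, so the expected number of DRAM accesses is 1 + 2/8 = 1.25.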
If a host read PLB lookup points to a cloned entry, the SSD needs 1) the physical address and 2) the LBA that was used when the data was written. The physical address is distributed between the master and clone entries, optionally filling all available PLB entries with duplicate data to reduce the probability that the targeted LBA requires a second DRAM read. For #2, the master LBA can be calculated based on the master pointer in each clone entry—this does not require an additional DRAM access by design, and this master pointer (along with the next clone pointer) may be cached from the original PLB lookup.
In some embodiments, in order to fit additional information in an existing indirection data structure, cloning may be tracked only at a granularity that is some multiple of the PLB size. In some embodiments, the granularity may be chosen as a power-of-two multiple of the PLB size in order to enable efficient computation of the index of the C-Chunk corresponding to a given LBA. As an example, the multiplier may be 8, but larger cloning granularities may be used. Cloning may involve cloning large ranges of LBAs at a time (such as an entire namespace), so the penalty for using a larger granularity may be minimal.
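For example, a C-Chunk index may be computed with a single shift when both granularities are powers of two; the constants below are illustrative (4 KB PLBs, 8 PLBs per C-Chunk):

```c
#include <stdint.h>

#define SECTORS_PER_PLB_SHIFT 3  /* 8 x 512 B sectors per 4 KB PLB        */
#define PLBS_PER_CHUNK_SHIFT  3  /* 8 PLBs per C-Chunk (a power of two)   */

/* Map a host LBA (in 512 B sectors) to the index of its C-Chunk with a
 * single shift -- the payoff of a power-of-two cloning granularity. */
static uint64_t c_chunk_index(uint64_t lba) {
    return lba >> (SECTORS_PER_PLB_SHIFT + PLBS_PER_CHUNK_SHIFT);
}
```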
In some embodiments, the indirection data structure may have embedded in it a circularly linked list of one or more C-Chunks that refer to the same data. In other embodiments, other forms of linked lists (e.g., a single-ended linked list) may be used. The physical addresses describing the C-Chunk's data may be spread among the indirection entries for that C-Chunk list.
A C-Chunk whose data resides physically on the media, tagged with the PLBs appropriate for that C-Chunk (i.e. the original copy) may be called the “Master C-Chunk”. Other C-Chunks that currently refer to the same data without storing a copy on the media may be called “Clone C-Chunks”.
The indirection data structure entries for all PLBs in a C-Chunk are grouped together to form a single “C-Entry”. For a Master C-Chunk, the C-Entry may be of the format illustrated in
For a Clone C-Chunk, the C-Entry may be of the format illustrated in
In one or more embodiments, NAND Address_i may be the NAND address for the i-th PLB of the Master C-Chunk (with i=0 representing the first PLB). A Master Index may be an indirection data structure index of the Master C-Entry (divided by 8 since it consumes the space of 8 PLB entries). A Next Index may be an indirection data structure index for the next Clone C-Entry pointing to the same data (divided by 8 since it consumes the space of 8 PLB entries). If there are no more Clone C-Entries to represent, this Next Index may point back to the Master C-Entry. In some embodiments, this Next Index may point to a value indicating termination of the list (e.g., a null pointer).
In some embodiments, one or more tests may be used to determine whether a C-Chunk is a master entry. For example, a C-Chunk may be the master if and only if the Master Index of its C-Entry points to the C-Entry itself.
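A sketch of this test, reusing the hypothetical clone_links_t layout introduced above:

```c
#include <stdbool.h>
#include <stdint.h>

/* A C-Chunk is the master if and only if the Master Index of its C-Entry
 * points to the C-Entry itself. */
static bool is_master(const clone_links_t *c_entries, uint32_t idx) {
    return c_entries[idx].master_index == idx;
}
```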
The relationship between a Master C-Entry and a Clone C-Entry via Master Indexes and Next Indexes may allow one or more of the following operations to be performed efficiently.
To clone a set of 8 consecutive, 8-PLB-aligned PLBs, one or more methods may be used. For example, in one or more embodiments, cloning a set of PLBs may proceed along the lines sketched below.
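One hypothetical realization follows; the c_entry_t layout, the spill of the last two addresses into the clone, and the use of spare clone slots for duplicates are illustrative assumptions consistent with the address distribution described above, not the disclosure's enumerated steps:

```c
#include <stdint.h>

#define PLBS_PER_CHUNK 8

/* Hypothetical C-Entry occupying the table slots of 8 PLBs: two slots go
 * to the list links, leaving six NAND-address slots per entry. */
typedef struct {
    uint32_t master_index;
    uint32_t next_index;
    uint64_t nand_addr[PLBS_PER_CHUNK - 2];
} c_entry_t;

/* Convert an uncloned, 8-PLB-aligned chunk at C-Chunk index 'src' into a
 * Master C-Entry and create its first Clone C-Entry at 'dst'. 'addr'
 * holds the chunk's eight NAND addresses taken from the old per-PLB
 * entries; the two that no longer fit in the master spill into the
 * clone, whose spare slots hold duplicates to cut second DRAM reads.
 * Setting the Clone Tracking bit on the affected entries is omitted. */
static void clone_chunk(c_entry_t *c, uint32_t src, uint32_t dst,
                        const uint64_t addr[PLBS_PER_CHUNK]) {
    c_entry_t *m = &c[src], *k = &c[dst];

    m->master_index = src;      /* a master points to itself              */
    m->next_index   = dst;      /* the sole clone follows the master      */
    for (int i = 0; i < 6; i++)
        m->nand_addr[i] = addr[i];        /* PLB offsets 0..5             */

    k->master_index = src;      /* the clone points at the master         */
    k->next_index   = src;      /* the last clone points back to master   */
    k->nand_addr[0] = addr[6];            /* spilled offsets 6..7         */
    k->nand_addr[1] = addr[7];
    for (int i = 0; i < 4; i++)
        k->nand_addr[2 + i] = addr[i];    /* duplicates of offsets 0..3   */
}
```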
To clone a C-Chunk into a new C-Chunk, one or more methods may be used. For example, in one or more embodiments, cloning a C-Chunk into a new C-Chunk may proceed along the lines sketched below.
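A hypothetical sketch, reusing c_entry_t from above; splicing the new clone in directly after the master keeps the operation constant-time:

```c
/* Add a new Clone C-Chunk at C-Chunk index 'dst' for data mastered at
 * 'master_idx', splicing it into the circular list directly after the
 * master. */
static void clone_into(c_entry_t *c, uint32_t master_idx, uint32_t dst) {
    c[dst].master_index = master_idx;
    c[dst].next_index   = c[master_idx].next_index;  /* inherit old next  */
    c[master_idx].next_index = dst;                  /* master points here */
    /* Populating c[dst].nand_addr with spilled/duplicate addresses would
     * follow the same layout as clone_chunk above. */
}
```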
To perform a read lookup on a PLB whose indirection entry has the Clone Tracking bit set, one or more methods may be used. For example, in one or more embodiments, performing a read lookup on a PLB with the Clone Tracking bit set may proceed along the lines sketched below.
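A hypothetical sketch of the lookup, under the same illustrative address layout as clone_chunk above (the master holds offsets 0-5; a clone holds spilled offsets 6-7 plus duplicates of offsets 0-3):

```c
#include <stdint.h>

typedef struct {
    uint64_t nand_addr;    /* 1) physical address of the requested PLB    */
    uint64_t written_lba;  /* 2) LBA recorded in NAND metadata at write   */
} cow_lookup_t;

/* Read lookup for global PLB number 'plb', whose entry has the Clone
 * Tracking bit set. The first DRAM read fetches this chunk's C-Entry; a
 * second read is needed only when the target address lives in another
 * C-Entry of the list (probability 2/8 under this layout). */
static cow_lookup_t cow_read_lookup(const c_entry_t *c, uint64_t plb) {
    uint32_t idx = (uint32_t)(plb / PLBS_PER_CHUNK);
    uint32_t off = (uint32_t)(plb % PLBS_PER_CHUNK);
    const c_entry_t *e = &c[idx];                 /* first DRAM read      */
    cow_lookup_t r;

    /* The data was written under the master's PLBs, so the expected LBA
     * tag derives from the master pointer -- no extra DRAM access. */
    r.written_lba = (uint64_t)e->master_index * PLBS_PER_CHUNK + off;

    if (e->master_index == idx) {                 /* master C-Entry       */
        r.nand_addr = (off < 6) ? e->nand_addr[off]
                    : c[e->next_index].nand_addr[off - 6];  /* 2nd read   */
    } else {                                      /* clone C-Entry        */
        if (off >= 6)
            r.nand_addr = e->nand_addr[off - 6];  /* spilled 6..7         */
        else if (off < 4)
            r.nand_addr = e->nand_addr[2 + off];  /* duplicated 0..3      */
        else
            r.nand_addr = c[e->master_index].nand_addr[off]; /* 2nd read  */
    }
    return r;
}
```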
To “unclone” a Clone C-Chunk, one or more techniques may be used. For example, in some embodiments the techniques may include the list manipulation sketched below.
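A hypothetical sketch of the list manipulation involved; relocation of the chunk's data and restoration of plain entries are noted but not modeled:

```c
/* Detach the Clone C-Chunk at C-Chunk index 'idx' from its circular list
 * by pointing the prior entry at the clone's successor. Rewriting the
 * detached chunk as eight plain per-PLB entries -- and any NAND
 * relocation needed so its data carries its own PLB tags -- is assumed
 * to happen elsewhere. */
static void unclone_clone(c_entry_t *c, uint32_t idx) {
    uint32_t prior = c[idx].master_index;  /* start the walk at the master */
    while (c[prior].next_index != idx)
        prior = c[prior].next_index;       /* find the entry pointing here */
    c[prior].next_index = c[idx].next_index;  /* splice the clone out      */
}
```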
To “unclone” a Master C-Chunk, one or more techniques may be used. For example, in some embodiments the techniques may include the steps sketched below.
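A hypothetical sketch for the case in which no clones remain; the two spilled addresses are assumed to have been recovered (e.g., from the released clone entry or a NAND metadata read) before this point:

```c
/* Convert a Master C-Chunk whose clones are all gone (its Next Index
 * points back to itself) into eight plain per-PLB entries. 'plain' is a
 * per-PLB view of the same table; 'addr6'/'addr7' are the two addresses
 * that had been spilled to a clone entry. Clearing the Clone Tracking
 * bits is part of the same rewrite (not modeled here). */
static void unclone_master(const c_entry_t *c, uint64_t *plain, uint32_t idx,
                           uint64_t addr6, uint64_t addr7) {
    uint64_t base = (uint64_t)idx * PLBS_PER_CHUNK;
    for (int i = 0; i < 6; i++)
        plain[base + i] = c[idx].nand_addr[i];
    plain[base + 6] = addr6;
    plain[base + 7] = addr7;
}
```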
When an indirection update is required for a target PLB that has the Clone Tracking bit set, the techniques may include atomically performing a sequence along the lines sketched below.
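One possible shape of that atomic sequence, as a heavily simplified sketch that reuses the helpers above; the locking mechanism and the master-promotion path are assumptions and are only indicated in comments:

```c
/* Detach the target PLB's C-Chunk from its clone list, restore plain
 * per-PLB entries, then apply the ordinary single-PLB update. The whole
 * sequence is assumed to run under a lock (or equivalent) so concurrent
 * lookups never observe a half-updated list. */
static void cow_indirection_update(c_entry_t *c, uint64_t *plain,
                                   uint64_t plb, uint64_t new_addr) {
    uint32_t idx = (uint32_t)(plb / PLBS_PER_CHUNK);

    if (c[idx].master_index != idx) {
        unclone_clone(c, idx);   /* splice this clone out of the list    */
        /* ...rewrite the chunk's slots as plain entries (data relocation
         * so the chunk carries its own PLB tags is assumed)...          */
    } else {
        /* Master being written: a remaining clone must first take over
         * as master (or, if none remain, unclone_master applies).
         * Omitted in this sketch. */
    }
    plain[plb] = new_addr;       /* ordinary update of the target PLB    */
}
```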
Garbage collection may be implemented such that the garbage collection algorithm never discards physical data whose PLB is marked with the Clone Tracking bit in the indirection system. That is, the garbage collection algorithm may consider all PLBs marked with a Clone Tracking bit to be “valid” data that requires relocation rather than erasure.
In one or more embodiments, a small counting bloom filter could be stored in SRAM to track C-Chunks present in the system. On a write, if the bloom filter indicates that there is no possibility of the PLB being part of a C-Chunk, then a direct update of the indirection system may safely be performed without reading the current data structure entry first. Because clones tend to be large sequential ranges, the hash function used for the bloom filter may be something like f(C-Entry Index)=C-Entry Index/Filter Size rather than a random function.
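A sketch of such a filter, with illustrative sizes; saturation handling for the 8-bit counters is omitted:

```c
#include <stdbool.h>
#include <stdint.h>

#define FILTER_BUCKETS   4096u
#define C_ENTRY_INDEXES  (1u << 20)   /* size of the C-Entry index space */

static uint8_t clone_filter[FILTER_BUCKETS];  /* small enough for SRAM   */

/* Locality-preserving mapping in the spirit of the disclosure's
 * f(index) = index / filter size: consecutive C-Entry indexes -- the
 * common shape of a large sequential clone -- share buckets instead of
 * scattering as a randomizing hash would. */
static uint32_t bucket_of(uint32_t c_entry_index) {
    return c_entry_index / (C_ENTRY_INDEXES / FILTER_BUCKETS);
}

static void filter_add(uint32_t idx)    { clone_filter[bucket_of(idx)]++; }
static void filter_remove(uint32_t idx) { clone_filter[bucket_of(idx)]--; }

/* On a host write: a zero count proves the PLB is not part of any
 * C-Chunk, so the indirection entry can be updated directly without
 * first reading it to check the Clone Tracking bit. */
static bool possibly_cloned(uint32_t c_entry_index) {
    return clone_filter[bucket_of(c_entry_index)] != 0;
}
```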
Indirection creation module 412 may create one or more data structures for tracking copy on write copies. A PLB may simply be an entry in a table. An indirection data structure entry may be extended (e.g., by one bit) to facilitate copy-on-write. The extra bit may be a “clone tracking” bit, which may be set to 1 to indicate either that there are other PLBs for which this PLB acts as the master copy, or that this is a clone that has some other PLB as its master copy. The remaining bits of an indirection data structure entry with the clone tracking bit set may or may not contain a NAND address (as an entry without the bit set does). The alternate data structure for ‘clone tracking=1’ includes fields to create a linked list of cloned entries and a pointer to the master entry. Space for these additional fields is gained by using a single entry to describe a larger (e.g., 2×) chunk of LBAs than an uncloned indirection entry.
Indirection management module 414 may perform one or more operations using an indirection data structure. Indirection management module 414 may facilitate cloning of data, reading of cloned data, uncloning data, and facilitate safe and efficient garbage collection using one or more of the methods as discussed above in reference to
Error handling module 416 may trap, log, report, and/or handle one or more errors associated with managing cloned data.
At stage 504, a master c-entry may be created. A C-Chunk whose data resides physically on the media, tagged with the PLBs appropriate for that C-Chunk (i.e. the original copy) may be called the “Master C-Chunk”. Other C-Chunks that currently refer to the same data without storing a copy on the media may be called “Clone C-Chunks”.
The indirection data structure entries for all PLBs in a C-Chunk are grouped together to form a single “C-Entry”. For a Master C-Chunk, the C-Entry may be of the format illustrated in
At stage 506, a clone c-entry may be created. For a Clone C-Chunk, the C-Entry may be of the format illustrated in
In one or more embodiments, NAND Address_i may be the NAND address for the i-th PLB of the Master C-Chunk (with i=0 representing the first PLB).
At stage 508, a block may be assigned to indicate a master index. A Master Index may be an indirection data structure index of the Master C-Entry (divided by 8 since it consumes the space of 8 PLB entries). Master Indexes of both a Master C-Entry and a Clone C-Entry may point to a Master C-Entry.
At stage 510, a block may be assigned to indicate a next index. A Next Index may be an indirection data structure index for the next Clone C-Entry pointing to the same data (divided by 8 since it consumes the space of 8 PLB entries). If there are no more Clone C-Entries to represent, this Next Index may point back to the Master C-Entry.
In some embodiments, one or more tests may be used to determine whether a C-Chunk is a master entry. For example, a C-Chunk may be the master if and only if the Master Index of its C-Entry points to the C-Entry itself.
At stage 512, the method 500 may end.
Other embodiments are within the scope and spirit of the invention. For example, the functionality described above can be implemented using software, hardware, firmware, hardwiring, or combinations of any of these. One or more computer processors operating in accordance with instructions may implement the functions associated with improved copy on write functionality within an SSD in accordance with the present disclosure as described above. If such is the case, it is within the scope of the present disclosure that such instructions may be stored on one or more non-transitory processor readable storage media (e.g., a magnetic disk or other storage medium). Additionally, modules implementing functions may also be physically located at various positions, including being distributed such that portions of functions are implemented at different physical locations.
The present disclosure is not to be limited in scope by the specific embodiments described herein. Indeed, other various embodiments of and modifications to the present disclosure, in addition to those described herein, will be apparent to those of ordinary skill in the art from the foregoing description and accompanying drawings. Thus, such other embodiments and modifications are intended to fall within the scope of the present disclosure. Further, although the present disclosure has been described herein in the context of a particular implementation in a particular environment for a particular purpose, those of ordinary skill in the art will recognize that its usefulness is not limited thereto and that the present disclosure may be beneficially implemented in any number of environments for any number of purposes. Accordingly, the claims set forth below should be construed in view of the full breadth and spirit of the present disclosure as described herein.
The present application is a divisional application of U.S. patent application Ser. No. 15/876,245, filed on Jan. 22, 2018, and now issued as U.S. Pat. No. 10,540,106, which application is a continuation of U.S. patent application Ser. No. 14/630,863, filed on Feb. 25, 2015, and now issued as U.S. Pat. No. 9,880,755. Each of the aforementioned applications is herein incorporated by reference in its entirety.