In modern operating systems, files may be referenced by file names. For example, in the Unix family of operating systems, a file may be referenced by one or more names (e.g., hard links). Conversely, a “soft link” refers to a link to a file name, rather than to the file itself.
Files may be arranged in directories. A directory may contain a list of file names or links. The term “file” may also include directories, thus facilitating the existence of directory hierarchies, i.e., directories containing sub-directories. A file name may uniquely identify the file within the directory containing the file. The file name and the path to the directory containing the file may uniquely identify the file among all other files in the computer system.
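By way of a non-limiting illustration, the distinction between hard and soft links can be observed with a minimal sketch using Python's standard `os` module on a POSIX system (the file names and the use of a temporary directory are illustrative assumptions):

```python
import os
import tempfile

# Work in a scratch directory so the example is self-contained.
workdir = tempfile.mkdtemp()
original = os.path.join(workdir, "data.txt")
with open(original, "w") as f:
    f.write("hello\n")

hard = os.path.join(workdir, "data-hard.txt")
soft = os.path.join(workdir, "data-soft.txt")
os.link(original, hard)     # hard link: another name for the same file
os.symlink(original, soft)  # soft link: a file that stores the target's name

# Both hard-linked names resolve to the same physical file (same inode number)...
assert os.stat(original).st_ino == os.stat(hard).st_ino
# ...whereas the symlink is a distinct object that merely points at the name.
assert os.lstat(soft).st_ino != os.stat(original).st_ino
```

Deleting `data.txt` in this sketch would leave `data-hard.txt` fully functional, while `data-soft.txt` would become a dangling link.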
The present disclosure is illustrated by way of examples, and not by way of limitation, and may be more fully understood with reference to the following detailed description when considered in connection with the figures, in which:
Described herein are methods and systems for file replication using file content location identifiers in distributed file systems. In certain implementations, a distributed file system may be provided by a network attached storage (NAS) system comprising one or more file server computer systems each coupled to one or more persistent data storage devices, such as magnetic or optical storage disks, solid-state drives (SSDs), etc. “Computer system” or “computer” herein shall refer to a system comprising one or more processors, one or more memory devices, and one or more input/output (I/O) interfaces.
A file server may execute a network file system (NFS) server to manage file input/output (I/O) requests originated by NFS clients. One or more client computers can execute file system clients (e.g., NFS clients) to communicate with one or more file servers.
In certain implementations, a distributed file system may comprise two or more server clusters which may reside in geographically distributed locations. Data replication between geographically distributed clusters may be referred to as geo-replication. Volume-level replication may be performed from a cluster of the distributed file system that has been designated as a master to one or more clusters that have been designated as slaves. Volume-level replication may comprise various file system operations performed on a plurality of files comprised by a file system volume.
In certain implementations, volume-level replication may comprise a plurality of file system operations identifying each file or directory by its file name or directory name, respectively. However, if a file residing on the master file system has been renamed after being replicated to a slave file system, without its contents being modified, filename-based replication would, on the slave file system, delete the existing file identified by the old name and then copy the contents of the file from the master file system to the slave file system in order to create a file with the new name. Since the replication agent identifies files by their file names, it has no means of determining that the copying operation is redundant, as the contents of the file have not changed.
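The redundancy can be seen in a toy model of filename-based synchronization (a hypothetical sketch; `sync_by_name` is not part of any real replication agent): a rename on the master shows up to the agent as one missing name and one new name, so it deletes and re-copies content that never changed:

```python
import os
import shutil

def sync_by_name(master_dir: str, slave_dir: str) -> None:
    """Naive one-way sync keyed purely on file names (illustrative only)."""
    master_names = set(os.listdir(master_dir))
    slave_names = set(os.listdir(slave_dir))
    for name in slave_names - master_names:
        os.unlink(os.path.join(slave_dir, name))      # old name: deleted
    for name in master_names - slave_names:
        shutil.copy2(os.path.join(master_dir, name),  # new name: full copy,
                     os.path.join(slave_dir, name))   # even though the bytes
                                                      # already reside on the slave
```

After an `os.rename` on the master, this model transfers the entire, unchanged file contents across the network again.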
Furthermore, identifying files and/or directories by their respective names may not always work correctly for files referenced by one or more hard links. “Hard link” herein refers to a directory entry that associates a name with a file. Certain file systems allow multiple hard links to be created for the same file, thus providing multiple aliases for the file name, so that changes made to the file contents after the file is opened by referencing one of the hard links will be visible when the file is opened by referencing any other hard link associated with the file. If two or more hard links reference the same physical location on a storage device, deleting the file by referencing any one of those hard links would only delete the referenced hard link, but not the file contents or the other hard links associated with the file. As a directory is a special type of file, multiple hard links to directories are also possible, although this feature may not be enabled in certain operating systems.
In the process of file replication, if a file is referenced, on the master file system, by one or more hard links, then two or more copies of the file would be created on the slave file system by the replication agent, as the latter has no means to determine that the hard links identify the same physical location of the file on a storage device. Furthermore, if the contents of such a file are modified on the master file system by a process referencing the file by one of the hard links, then only the copy corresponding to that hard link would be modified on the slave file system by the replication agent, as the latter has no means to determine that the hard links identify the same physical location of the file on a storage device.
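The duplication can be reproduced with a minimal sketch (illustrative file names; a per-name copy loop stands in for the filename-based replication agent):

```python
import os
import shutil
import tempfile

master = tempfile.mkdtemp()
slave = tempfile.mkdtemp()

path_a = os.path.join(master, "report.txt")
with open(path_a, "w") as f:
    f.write("contents\n")
os.link(path_a, os.path.join(master, "report-alias.txt"))  # second hard link

# On the master, both names share one inode (one physical copy).
inodes = {os.stat(os.path.join(master, n)).st_ino for n in os.listdir(master)}
assert len(inodes) == 1

# A per-name copy produces two unrelated files on the slave.
for name in os.listdir(master):
    shutil.copy2(os.path.join(master, name), os.path.join(slave, name))
inodes = {os.stat(os.path.join(slave, n)).st_ino for n in os.listdir(slave)}
assert len(inodes) == 2  # the hard-link relationship is lost
```

From this point on, a write through one slave name can no longer be observed through the other, diverging from the master's semantics.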
To address the above noted and other deficiencies, the present disclosure provides systems and methods for identifying files residing on the file system by unique identifiers associated with physical locations of the files on storage devices, rather than by the file names. The methods described herein may be employed for file replication and for certain other operations on distributed file systems (e.g., backup, self-healing, and storage media defect detection procedures).
Various aspects of the above referenced methods and systems are described in detail herein below by way of examples, rather than by way of limitation.
One or more client computers 120 may be communicatively coupled, e.g., over a network 110, to file servers 140. A file server 140 may run a file system server daemon (or any other component such as a module or program) 142 to export a local file system to clients 120 as one or more volumes accessible by the clients.
Network 110 may be provided by one or more local area networks, one or more wide area networks, or any combination thereof. Client computer 120 may execute a file system client daemon 185 to connect to one or more servers 140 via an application-level protocol implemented over TCP/IP, InfiniBand, or other transports, in order to access the file system volumes exported by one or more servers 140. Client computer 120 may further execute one or more applications 190.
In an illustrative example, cluster 150A may be designated as the master cluster, and cluster 150B may be designated as a slave cluster. In another illustrative example, there may be provided two or more slave clusters. In various examples, master cluster 150A and slave cluster 150B may have the same or different configurations, with respect to servers, storage devices, and other cluster features. In certain implementations, master cluster 150A and one or more slave clusters 150B may reside in geographically distributed locations.
Replication agents 152A, 152B running on servers 140 may be configured to perform volume-level replication of master cluster 150A to one or more slave clusters 150B. In an illustrative example, replication agents 152A, 152B may constantly or periodically execute a background replication process to synchronize the file system volumes on master cluster 150A and one or more slave clusters 150B.
In certain implementations, the distributed file system may maintain a change log file reflecting the file system operations performed upon the file system objects (e.g., creation, modification, or deletion of files or directories) of the master cluster. Replication agent 152 may iterate through the change log records and perform, on one or more slave clusters 150B, the file operations specified by the change log record, as described in more detail herein below.
In certain implementations, in order to avoid the above-described redundant copying operations associated with file renaming and/or file aliasing by hard links, the replication agent may reference each file system object (a file or a directory) by an identifier of a data structure that comprises one or more identifiers of the physical locations of the contents of the file system object on a storage device, rather than identifying file system objects by their names. In an illustrative example, the replication agent may reference file system objects by the identifiers of their index nodes.
“Index node” or “inode” herein shall refer to a data structure associated with a file system object (e.g., a file or a directory). An inode representing a file system object may comprise one or more identifiers of physical locations (e.g., disk blocks) that store the contents of the file system object. An inode may further comprise various attributes of the file system object, including manipulation metadata (e.g., file creation, access, and/or modification time), as well as owner and permission metadata (e.g., group identifier, user identifier, and/or permissions). An inode may be identified by its number.
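On POSIX systems, these attributes are visible through the `stat(2)` interface; a minimal sketch in Python (the script stats itself so the example is self-contained):

```python
import os

st = os.stat(__file__)        # stat this script itself

print(st.st_ino)              # inode number
print(st.st_size)             # size of the file contents
print(st.st_mtime)            # last modification time (manipulation metadata)
print(st.st_uid, st.st_gid)   # owner and group identifiers
print(oct(st.st_mode))        # file type and permission bits
```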
In certain implementations, a plurality of inodes may be stored in an inode table residing in a known physical location on a storage device. The inode table may be indexed by the inode numbers, so that a file system driver may access the inode associated with a given file and retrieve the file physical location and/or file metadata. Alternatively, instead of implementing an inode table, certain file systems may store equivalent data in various other data structures.
In conventional file systems, when an inode is created, it may be assigned an arbitrary identifier (inode number), e.g., a random number. Hence, a file on the master cluster and a replica of the file on a slave cluster would have two different inode numbers, thus making it impractical for various clients (including, e.g., file replication agents) that need to access files on both master and slave clusters to reference the files by their inode numbers. The present disclosure resolves the issue of a file on the master cluster and its replica on a slave cluster being associated with two different inode numbers, by assigning the inode identifier associated with a particular file on the master cluster to the inode associated with a replica of the file on a slave cluster. Hence, both the file on the master cluster and the replica of the file on the slave cluster are associated with inodes having identical inode identifiers, thus enabling various clients (including, e.g., file replication agents) that need to access files on both master and slave clusters to reference the files by their inode numbers.
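The mismatch, and one user-space emulation of the disclosed remedy, can be sketched as follows (a hypothetical illustration; the mapping structure is an assumption for exposition, not the disclosed on-disk format):

```python
import os
import shutil
import tempfile

master = tempfile.mkdtemp()
slave = tempfile.mkdtemp()

src = os.path.join(master, "file.txt")
with open(src, "w") as f:
    f.write("payload\n")

dst = os.path.join(slave, "file.txt")
shutil.copy2(src, dst)

master_id = os.stat(src).st_ino
# Two live files on one file system never share an inode number, so the
# replica's number is unrelated to the master's.
assert os.stat(dst).st_ino != master_id

# Emulation of the disclosed remedy: index the replica under the *master's*
# identifier, so that a single identifier resolves on both clusters.
id_index = {master_id: dst}
```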
In certain implementations, for each file, the file system server may create a file name alias comprising an identifier of the respective inode, e.g., by creating a hard link with a name comprising the identifier of the inode referenced by the hard link. All such hard links may be placed in a pre-defined directory (e.g., a hidden directory) where they can be accessed by various clients (including, e.g., file replication agents).
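A minimal sketch of such an alias directory follows (the `.ids` directory name and the `make_alias` helper are assumptions for illustration; the disclosure only requires a pre-defined, e.g. hidden, location):

```python
import os

ALIAS_DIR = ".ids"  # hypothetical hidden directory of inode-named aliases

def make_alias(volume_root: str, file_path: str) -> str:
    """Create (or reuse) a hard link named after the file's inode number."""
    alias_dir = os.path.join(volume_root, ALIAS_DIR)
    os.makedirs(alias_dir, exist_ok=True)
    alias = os.path.join(alias_dir, str(os.stat(file_path).st_ino))
    if not os.path.exists(alias):
        os.link(file_path, alias)  # alias and file share the same inode
    return alias
```

Because the alias is itself a hard link, renaming the original file leaves the alias valid: it still resolves to the same physical contents.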
As noted herein above, the master file system may maintain a change log file reflecting the operations performed by file system clients upon the file system objects (e.g., creation, modification, or deletion of files). The change log may identify the file system objects by their respective inode identifiers. The change log may then be used by a volume-level replication agent, as well as by certain other procedures accessing the files (e.g., backup, self-healing, storage media defect detection procedures).
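In such a log, each record needs only an operation code and the inode identifier involved; a hypothetical record layout (the field names are illustrative assumptions, not the disclosed format):

```python
from dataclasses import dataclass

@dataclass
class ChangeLogRecord:
    op: str           # e.g., "CREATE", "MODIFY", "DELETE", "RENAME"
    inode_id: int     # identifies the file by content location, not by name
    detail: str = ""  # operation-specific data, e.g., a new name for RENAME

# A rename on the master produces a record about the inode, so a consumer
# can see that the contents themselves were never touched.
log = [ChangeLogRecord("CREATE", 524301),
       ChangeLogRecord("RENAME", 524301, "report-v2.txt")]
```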
In an illustrative example, replication agent 152 may iterate through records of the change log file of master cluster 150A. For each change log record, replication agent 152 may construct a file name alias of the file referenced by its inode identifier. In an illustrative example, replication agent 152 may append the inode identifier referenced by the change log record to a path to a pre-defined directory that stores file name aliases, where each file name alias comprises the identifier of the inode that stores the metadata for the file referenced by the file name alias, as described in more detail herein above.
Upon constructing the file name alias for the file referenced by a change log record, replication agent 152 may perform, on one or more slave clusters 150B, the operations specified by the change log record. In an illustrative example, replication agent 152 may copy the file specified by the change log record from master cluster 150A to one or more slave clusters 150B. In various illustrative examples, replication agent 152 may delete, create, or rename, on one or more slave clusters 150B, the file specified by the change log record.
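A hypothetical dispatch over such records might look as follows (a simplified sketch building on the `ChangeLogRecord` and alias-directory sketches above; it elides creating the slave's alias directory and visible names on CREATE):

```python
import os
import shutil

ALIAS_DIR = ".ids"  # same hypothetical alias directory as in the sketch above

def apply_record(rec, master_root: str, slave_root: str) -> None:
    """Apply one change log record to the slave, addressing the file by alias."""
    master_alias = os.path.join(master_root, ALIAS_DIR, str(rec.inode_id))
    slave_alias = os.path.join(slave_root, ALIAS_DIR, str(rec.inode_id))
    if rec.op in ("CREATE", "MODIFY"):
        # One transfer per inode, regardless of how many names reference it.
        shutil.copy2(master_alias, slave_alias)
    elif rec.op == "DELETE":
        os.unlink(slave_alias)
    elif rec.op == "RENAME":
        # The contents are untouched: only (re)create the visible name,
        # hard-linked to the existing replica.
        new_name = os.path.join(slave_root, rec.detail)
        if os.path.lexists(new_name):
            os.unlink(new_name)
        os.link(slave_alias, new_name)
```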
At block 310, a file system server may create file name aliases for a plurality of files of a file system, as described in more detail herein above. In an illustrative example, for each file, a file name alias comprising an identifier of the respective inode may be created, e.g., by creating a hard link with a name comprising the identifier of the inode referenced by the hard link. All such hard links may be placed in a pre-defined directory (e.g., a hidden directory) where they can be accessed by various clients.
At block 320, a file replication agent running on the file system server may receive a change log file comprising a plurality of records.
At block 330, the replication agent may read a change log record identified by a file pointer associated with the change log file. The change log record may reflect one or more file system operations performed upon one or more file system objects (e.g., creation, modification, or deletion of files or directories). The change log record may identify the file system objects by their respective inode identifiers, as described in more detail herein above.
At block 340, the replication agent may construct a file name alias of the file referenced by the change log record by its inode identifier. In an illustrative example, the replication agent may append the inode identifier referenced by the change log record to a path to a pre-defined directory that stores file name aliases, where each file name alias comprises the identifier of the inode that stores the metadata for the file referenced by the file name alias, as described in more detail herein above.
At block 350, the replication agent may perform the file system operations specified by the change log record. In performing the file system operations, the replication agent may reference the file by the file name alias, as described in more details herein above. In an illustrative example, the file system operations to be performed may comprise copying the file from a master file server to a slave file server. In another illustrative example, the file system operation to be performed may comprise deleting a replica of the file on the slave file server. In another illustrative example, the file system operation to be performed may comprise renaming the replica of the file on the slave file server.
At block 360, the replication agent may advance the pointer associated with the log file to point to the next log file record.
Responsive to determining, at block 370, that the end of the log file has been reached, the method may terminate; otherwise, the method may loop back to block 330 to process the next change log record.
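Blocks 320 through 370 amount to a cursor-driven loop over the change log; a condensed sketch reusing the hypothetical `apply_record` helper above (the log is modeled as an in-memory list received at block 320; a real change log would be a file with a persistent offset):

```python
def replicate(log, master_root: str, slave_root: str) -> None:
    pointer = 0                        # cursor into the change log (block 330)
    while pointer < len(log):          # end-of-log check (block 370)
        rec = log[pointer]
        # Block 340 happens inside apply_record: the inode identifier is
        # appended to the pre-defined alias directory to form the alias path.
        apply_record(rec, master_root, slave_root)  # block 350
        pointer += 1                   # advance to the next record (block 360)
```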
In one example, computer system 1000 may be connected (e.g., via a network, such as a Local Area Network (LAN), an intranet, an extranet, or the Internet) to other computer systems (e.g., other nodes). Computer system 1000 may operate in the capacity of a server or a client computer in a client-server environment, or as a peer computer in a peer-to-peer or distributed network environment. Computer system 1000 may be provided by a personal computer (PC), a tablet PC, a set-top box (STB), a Personal Digital Assistant (PDA), a cellular telephone, a web appliance, a server, a network router, switch or bridge, or any device capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that device. Further, the term “computer” shall include any collection of computers that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methods described herein.
In a further aspect, computer system 1000 may include a processor 1002, a volatile memory 1004 (e.g., random access memory (RAM)), a non-volatile memory 1006 (e.g., read-only memory (ROM) or electrically-erasable programmable ROM (EEPROM)), and a storage memory 1016 (e.g., a data storage device), which may communicate with each other via a bus 1008.
Processor 1002 may be provided by one or more processors such as a general purpose processor (such as, for example, a complex instruction set computing (CISC) microprocessor, a reduced instruction set computing (RISC) microprocessor, a very long instruction word (VLIW) microprocessor, a microprocessor implementing other types of instruction sets, or a microprocessor implementing a combination of types of instruction sets) or a specialized processor (such as, for example, an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), a digital signal processor (DSP), or a network processor).
Computer system 1000 may further include a network interface device 1022. Computer system 1000 also may include a video display unit 1010 (e.g., an LCD), an alphanumeric input device 1012 (e.g., a keyboard), a pointing device 1014 (e.g., a mouse), and an audio output device 1020 (e.g., a speaker).
In an illustrative example, storage memory 1016 may include a tangible computer-readable storage medium 1024 on which may be stored instructions 1054 encoding file system server daemon 142 implementing method 300 for file replication using file content location identifiers. In an illustrative example, storage memory 1016 may include a tangible computer-readable storage medium 1024 on which may be stored instructions 1054 encoding replication agent 152 implementing method 300 for file replication using file content location identifiers. Instructions 1054 may also reside, completely or partially, within volatile memory 1004 and/or within processor 1002 during execution thereof by computer system 1000; hence, volatile memory 1004 and processor 1002 may also constitute machine-readable storage media.
While computer-readable storage medium 1024 is shown in the illustrative examples as a single medium, the term “computer-readable storage medium” shall include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of executable instructions. The term “computer-readable storage medium” shall also include any tangible medium that is capable of storing or encoding a set of instructions for execution by a computer that cause the computer to perform any one or more of the methods described herein. The term “computer-readable storage medium” shall include, but not be limited to, solid-state memories, optical media, and magnetic media.
The methods, components, and features described herein may be implemented by discrete hardware components or may be integrated in the functionality of other hardware components such as ASICs, FPGAs, DSPs, or similar devices. In addition, the methods, components, and features may be implemented by firmware modules or functional circuitry within hardware devices. Further, the methods, components, and features may be implemented in any combination of hardware devices and software components, or only in software.
Unless specifically stated otherwise, terms such as “updating”, “identifying”, “determining”, “sending”, “assigning”, or the like, refer to actions and processes performed or implemented by computer systems that manipulate and transform data represented as physical (electronic) quantities within the computer system registers and memories into other data similarly represented as physical quantities within the computer system memories or registers or other such information storage, transmission or display devices.
Examples described herein also relate to an apparatus for performing the methods described herein. This apparatus may be specially constructed for performing the methods described herein, or it may comprise a general purpose computer system selectively programmed by a computer program stored in the computer system. Such a computer program may be stored in a computer-readable tangible storage medium.
The methods and illustrative examples described herein are not inherently related to any particular computer or other apparatus. Various general purpose systems may be used in accordance with the teachings described herein, or it may prove convenient to construct more specialized apparatus to perform method 300 and/or each of its individual functions, routines, subroutines, or operations. Examples of the structure for a variety of these systems are set forth in the description above.
The above description is intended to be illustrative, and not restrictive. Although the present disclosure has been described with references to specific illustrative examples and implementations, it will be recognized that the present disclosure is not limited to the examples and implementations described. The scope of the disclosure should be determined with reference to the following claims, along with the full scope of equivalents to which the claims are entitled.
This application is a continuation of U.S. patent application Ser. No. 14/219,250 filed on Mar. 19, 2014, titled “File Replication Using File Content Location Identifiers,” the entire content of which is incorporated by reference herein.