The described technology is directed to data access, consistency, mobility, and modification in the field of data storage systems, including file systems.
The demand for scalable storage resources and the ability to provide rapid access to content stored thereby is a key concern to end-users. Enterprises, businesses, and individuals alike now use large scale systems to store data that is remotely accessible via a network. Such systems are often accessible via closed (e.g., enterprise) and open (e.g., Internet) networks and allow concurrent access via multiple client devices. Various implementations of large scale systems relying on network access have been developed. In each implementation, the systems are subject to system backups, hardware updates, and hardware failure.
In order to protect data from loss due to, for example, hardware failures, a technique called “mirroring” is sometimes used: two or more physical copies of the data are maintained in two or more physical locations, such as on differing hardware storage devices. This may be done using a variety of techniques providing associated logical addresses to those copies, such as mirrored discs, RAID systems, and other similar techniques implemented in networked data storage systems.
The inventors have recognized significant disadvantages of conventional storage systems. To ensure consistency on a data storage system during both reads and writes on the client side (e.g., computing devices communicating with the data storage system) and the server side, data stored by conventional storage systems is often inaccessible to the client during system backups, hardware updates, and hardware failures. Even if the data is accessible during these times, e.g., during a hardware failure, the data is often locked and cannot be written to by a client. Commit latency is also a problem in conventional storage systems, because each write is first prepared and then committed to the system to ensure a successful commit and data consistency across servers and client devices.
In response to recognizing these deficiencies of conventional storage systems, the inventors have conceived and reduced to practice a transactional block data system in which data is made available in at least two logical locations. This system may be implemented, for example, in a file system, a block storage device over a block protocol (e.g., iSCSI), a database, or an object store, and so on. Methods allowing for continuous write access to the data at a logical location during system failures can then be implemented. With this backup copy of data created, various additional methods are implemented to improve system performance and efficiency. For example, one method includes replicating a backup copy to create a second, additional backup copy when a storage device becomes unavailable. This additional backup copy is then utilized to provide continual access to the data when that storage device is unavailable. In another method, creation of an additional data copy is used to move data across various storage devices in the data storage system. In yet another method, the data copy is merged with other data in the data storage system to consolidate the data on a hardware storage device. Each of these methods is further discussed below with reference to a file system. However, in various embodiments the transactional block data storage system is implemented in systems of a variety of other types.
The layers within the pstores further reference two or more bstore IDs, each of which identifies a block storage unit (bstore) located on a particular computer node 118 and a particular hardware storage device 116 associated with that particular computer node 118. The two or more referenced bstores in each layer provide the physical locations of the mirrored data. Accordingly, a single layer in a pstore references physical locations in the data storage system containing the same data. That single layer is a logical location in the data storage system that is accessible via a logical address. The data storage map 112, also referred to as the pstore to bstore map (pb-map), may be stored on a paxos or similar system capable of facilitating atomic transactions across every computer node in the data storage system. The paxos system may also be used to facilitate maintaining synchronized copies of the data storage map 112 on each computer node 118.
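To make the mapping concrete, the following is a minimal sketch, in Python, of one possible in-memory representation of the pb-map described above. The class and field names (PBMap, PstoreEntry, Layer, BstoreLocation) are illustrative assumptions rather than names used by the described system, and the example values at the end are likewise hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass(frozen=True)
class BstoreLocation:
    """Physical location of a block storage unit (bstore)."""
    node_id: int      # computer node holding the bstore
    disc_id: int      # hardware storage device on that node

@dataclass
class Layer:
    """One layer of a pstore: bstore IDs holding mirrored copies of the same data."""
    bstore_ids: List[int]

@dataclass
class PstoreEntry:
    """Layers are ordered top-down; only layers[0] (layer 1) is writable."""
    layers: List[Layer] = field(default_factory=list)

class PBMap:
    """pstore-to-bstore map (pb-map): pstore ID -> layers -> bstore IDs."""
    def __init__(self):
        self.pstores: Dict[int, PstoreEntry] = {}
        self.bstore_locations: Dict[int, BstoreLocation] = {}

    def top_layer(self, pstore_id: int) -> Layer:
        return self.pstores[pstore_id].layers[0]

# Example: pstore 1 with a single layer mirrored across two bstores.
pb_map = PBMap()
pb_map.bstore_locations[2] = BstoreLocation(node_id=1, disc_id=1)
pb_map.bstore_locations[6] = BstoreLocation(node_id=2, disc_id=3)
pb_map.pstores[1] = PstoreEntry(layers=[Layer(bstore_ids=[2, 6])])
```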
At the lowest layer in
As discussed with reference to
Because
As previously mentioned, for each pstore ID entry in the pb-map there may be one or more layers. The top layer, i.e., layer 1, is the only writeable layer in any given pstore. Accordingly, to write to a specific pstore with a pstore address (paddr=pstore ID, offset), the pstore ID identified in the paddr is first looked up in the pb-map. Once found, the associated bstore IDs in the top layer of that pstore are identified. The system then writes the data intended for the paddr to the bstores referenced by the identified bstore IDs, at the offset specified in the paddr. For example, in some embodiments, to write to a paddr (pstore ID=1, offset=56), pstore ID=1 is looked up in the pb-map. The bstore IDs in the top layer are then identified. Referring back to
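Assuming the illustrative PBMap sketch above, the write path described in this paragraph might be expressed as follows; write_to_bstore is a hypothetical callback standing in for the per-bstore commit protocol discussed later.

```python
def write(pb_map, paddr, data, write_to_bstore):
    """Write `data` at paddr = (pstore ID, offset).

    Only the top layer of the pstore is writable, so the data is sent to
    every bstore referenced by that layer (the mirrored copies).
    `write_to_bstore(bstore_id, offset, data)` is a hypothetical callback.
    """
    pstore_id, offset = paddr
    top_layer = pb_map.top_layer(pstore_id)       # look up pstore ID in the pb-map
    for bstore_id in top_layer.bstore_ids:        # bstore IDs in layer 1
        write_to_bstore(bstore_id, offset, data)  # write to each mirrored bstore
```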
To perform a read of the data at a particular paddr (pstore ID, offset), the pstore ID identified in the paddr is first looked up in the pb-map stored in the address abstraction layer of the data storage system. The associated bstore IDs in the top layer of the identified pstore are then identified, and an attempt is made to read the data from one of the bstores referenced by those bstore IDs at the offset specified in the paddr. If the data block is found in the bstore, the data is returned in response to the read request. If the bstore is unavailable, or there is another error, a read attempt is made on a bstore referenced by another bstore ID in the same layer. A read may be attempted sequentially for all bstore IDs identified in the layer until an available bstore is found.
In some embodiments, an available bstore returns the block of data or a message to the effect of “I don't have it.” If the available bstore does not have the data, the next layer in the same pstore is referenced and a new set of bstore IDs is identified. Again, a read may be attempted sequentially for all referenced bstores in this next layer until an available bstore is found. In an illustrative and non-limiting example, a read for the paddr (pstore ID=1, offset=56) is looked up in the pb-map. The bstore IDs in the top layer are then identified. As shown in
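A sketch of the layered read fallback just described, again assuming the illustrative PBMap structure from the earlier sketch; read_from_bstore and BstoreUnavailable are hypothetical stand-ins for the actual per-bstore read path and its error signalling.

```python
class BstoreUnavailable(Exception):
    """Raised when a bstore's node or hardware storage device cannot be reached."""

def read(pb_map, paddr, read_from_bstore):
    """Read the block at paddr = (pstore ID, offset).

    Bstores in the top layer are tried first; on unavailability the next
    mirror in the same layer is tried, and if an available bstore reports
    that it does not have the block (returns None here), the search drops
    to the next layer down. `read_from_bstore(bstore_id, offset)` is a
    hypothetical callback.
    """
    pstore_id, offset = paddr
    for layer in pb_map.pstores[pstore_id].layers:           # top layer first
        for bstore_id in layer.bstore_ids:
            try:
                block = read_from_bstore(bstore_id, offset)
            except BstoreUnavailable:
                continue                                      # try the next mirror
            if block is not None:
                return block                                  # found in this layer
            break     # an available mirror lacks the block: check the next layer
        else:
            # No mirror in this layer was reachable; this sketch simply
            # moves on to the next layer.
            continue
    raise KeyError(f"no data found for paddr {paddr}")
```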
In some embodiments, each bstore ID has a corresponding physical hardware address associated with a computer node, a data storage device on a computer node, and a disc object at which is located a super block for that bstore. This information may be embedded as a tuple in the pb-map or looked up in an external data structure. The super block may comprise a link to a write-ahead log and a link to a data structure comprising disc address pointers or offsets corresponding to associated protected data blocks. The data structure may comprise an index table, a hash map, a b-tree, or any other common method of mapping between two integers. The offset in the paddr is used to access the data structure and identify the disc address pointer at which the protected data block is located. The link to the write-ahead log may point to a linked list of log entries comprising the write-ahead log (WAL). In some embodiments, the WAL may be implemented as a linked list, a linear log, or any other representation of a log. The log entries may comprise a transaction ID and one or more offsets, together with their associated disc address pointers, which point to data blocks that have been written out of place on the same hardware storage device on which the bstore is located.
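The per-bstore on-disc structures described in this paragraph might be modeled as in the following sketch. The names (SuperBlock, LogEntry) and the use of a plain dict for the offset-to-disc-address structure are assumptions made for illustration; the description allows an index table, hash map, b-tree, or similar structure there.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class LogEntry:
    """Write-ahead log entry: a transaction ID plus (offset, disc address)
    pairs pointing at data written out of place on the same device."""
    transaction_id: int
    writes: List[Tuple[int, int]]   # (offset, disc_address_pointer)

@dataclass
class SuperBlock:
    """Super block for one bstore.

    `block_index` stands in for the offset -> disc-address structure
    (an index table, hash map, b-tree, or similar); `wal` stands in for
    the linked list of write-ahead-log entries.
    """
    block_index: Dict[int, int] = field(default_factory=dict)
    wal: List[LogEntry] = field(default_factory=list)

    def lookup(self, offset: int) -> int:
        """Return the disc address pointer of the protected block at `offset`."""
        return self.block_index[offset]
```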
In some embodiments, when a write request is sent in a commit request to a particular bstore ID, space on the indicated computer node and data storage device (i.e., disc) is allocated and the data is written to the data storage device (e.g., disc1, disc2, disc3 in
Upon receipt of a positive acknowledgement from all the nodes, a “commit” message is sent to all the nodes including the data to be written to each data block. Subsequently, upon receipt of a positive commit acknowledgement from all nodes, the data is considered durably written to disc. While this approach ensures that the data is written successfully to disc prior to sending confirmation to the file system, the two-phase nature of the approach requires two round-trip communication loops across a data storage system, such as a cluster network, before the data write is confirmed. This can create delays in the system and reduce the perceived performance of the data storage system relative to a single-phase commit, which is described in the following paragraphs with reference to
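For contrast with the single-phase approach discussed next, here is a minimal sketch of the two-phase pattern described above; prepare and commit are hypothetical per-node calls, and the two sequential loops correspond to the two round trips that must complete before the write can be confirmed.

```python
def two_phase_write(nodes, block_writes, prepare, commit):
    """Two-phase commit of `block_writes` to every node in `nodes`.

    `prepare(node, block_writes)` and `commit(node, block_writes)` are
    hypothetical calls returning True on a positive acknowledgement.
    Two sequential rounds mean two network round trips before the write
    can be confirmed to the file system.
    """
    # Round trip 1: ask every node to acknowledge the pending write.
    if not all(prepare(node, block_writes) for node in nodes):
        return False    # a node declined; the write is not confirmed
    # Round trip 2: send the commit carrying the data for each block.
    if not all(commit(node, block_writes) for node in nodes):
        return False
    return True         # data is considered durably written to disc
```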
For example, in
As discussed in the previous embodiments, it may be possible for a component of the data storage cluster, such as a computer node or hardware storage device, to fail or become unavailable partway through a write. In some embodiments, if the response state of a bstore to a write request is unknown due to the unavailability of a computer node or hardware storage device associated with that bstore, the response may be assumed to have been positive. It may be assumed to have been positive because, if a positive response had in fact been sent prior to the failure and positive responses were received from the other bstores, the file system may already have treated the data as durably written. Assuming a positive response therefore keeps the data storage consistent with the file system view.
In some embodiments, upon recovery from a system error or upon system start-up, one node may act as the “recovery leader.” This node may ask every bstore in the system to provide a list of the log entries in its write-ahead log (WAL). This information may then be used to build a transaction status table.
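One way to picture the transaction status table the recovery leader builds from the per-bstore WAL listings is sketched below. The table layout, the status labels, and the assumption that the set of bstores each transaction was sent to is known (the participants argument here) are illustrative rather than prescribed by the description.

```python
from typing import Dict, Set

def build_transaction_status_table(
    wal_listings: Dict[int, Set[int]],   # available bstore ID -> txn IDs in its WAL
    unavailable_bstores: Set[int],       # bstores whose response state is unknown
    participants: Dict[int, Set[int]],   # txn ID -> bstore IDs the commit was sent to
) -> Dict[int, Dict[int, str]]:
    """Build a transaction status table: {txn ID: {bstore ID: status}}.

    A participant bstore is "positive" if the transaction appears in its
    write-ahead log, "unknown" if the bstore is unavailable, and "missing"
    if the bstore is reachable but has no log entry for the transaction.
    """
    table: Dict[int, Dict[int, str]] = {}
    for txn_id, bstore_ids in participants.items():
        table[txn_id] = {}
        for bstore_id in bstore_ids:
            if bstore_id in unavailable_bstores:
                table[txn_id][bstore_id] = "unknown"
            elif txn_id in wal_listings.get(bstore_id, set()):
                table[txn_id][bstore_id] = "positive"
            else:
                table[txn_id][bstore_id] = "missing"
    return table

# Example mirroring the description below: transaction 13 was sent to bstores 2
# and 6; bstore 6 logged it, and bstore 2 is unreachable.
example = build_transaction_status_table(
    wal_listings={6: {13}},
    unavailable_bstores={2},
    participants={13: {2, 6}},
)
# example[13] == {2: "unknown", 6: "positive"}
```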
For example, in
For transaction ID=13, BSTORE ID=6 has a positive transaction status, but the transaction status for BSTORE ID=2 is unknown. Because the transaction status is unknown for BSTORE ID=2, but the remainder of the transaction status responses are positive, it is possible that BSTORE ID=2 returned a positive response, which would have resulted in an affirmation of a durable write being returned to the client. Therefore, to keep the file system and data storage consistent, transaction ID=13 must be rolled forward. This may be done using a cleaning kit. As previously mentioned, a cleaning kit comprises the data needed to bring a bstore to a known state. In embodiments described herein, a cleaning kit is generated on a node other than the node on which the corresponding bstore is located. In other embodiments, the cleaning kit is generated on the same node on which the unavailable bstore is located, but on a different hardware storage device (i.e., disc) within that node. Furthermore, although the previous example illustrates a transaction limited to a single pstore, it should be understood that a single transaction can, and often does, affect multiple pstores. In some embodiments, the write requests received from clients for each pstore are bundled together in a single commit request, and numerous such commit requests may be included in a single transaction. A single transaction thus includes a plurality of commit requests intended for any number of pstores and, consequently, any number of bstores.
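The roll-forward rule applied to transaction ID=13 above can be stated compactly as in the following sketch, which assumes the status table from the earlier example; the handling of the case where a reachable bstore has no log entry at all ("missing") is an assumption made for illustration.

```python
def should_roll_forward(txn_statuses: dict) -> bool:
    """Decide whether one transaction must be rolled forward.

    `txn_statuses` maps bstore ID -> "positive" | "missing" | "unknown"
    for a single transaction. If every reachable participant reported a
    positive status and the only gaps are unknown (unavailable) bstores,
    the client may already have received an affirmation of a durable
    write, so the transaction is rolled forward to keep the file system
    and the data storage consistent.
    """
    statuses = set(txn_statuses.values())
    if "missing" in statuses:
        return False                 # a reachable bstore never logged the write
    return "positive" in statuses    # all known responses positive; unknowns allowed

# For transaction ID=13: bstore 6 positive, bstore 2 unknown -> roll forward.
assert should_roll_forward({6: "positive", 2: "unknown"}) is True
```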
In some embodiments, upon system restart, the file system may search the pb-map to identify bstore IDs referencing a failed or unavailable computer node or hardware storage device. When such a bstore ID is identified, a cleaning kit is created from one or more of the remaining bstores in the same layer associated with the particular pstore ID. The cleaning kit may include information regarding the in-process transactions to be rolled forward, such as transaction ID, offset, and data to be written. There may be rules regarding the location of the cleaning kit, such as not placing it on the same node as the remaining bstore used to create the cleaning kit, not placing it on the same node as the unavailable bstore, and the like. The cleaning kit is referenced by a cleaning kit ID in the pb-map. The cleaning kit ID includes a node, a disc (i.e., a hardware storage device), and an object. The cleaning kit ID is stored in the pb-map in the same layer of the pstore as the information regarding the unavailable bstore. The cleaning kit is then used to update the unavailable bstore with the data received in any new write request when that bstore becomes available.
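A sketch of how the cleaning-kit bookkeeping might be represented, reusing the style of the earlier pb-map sketch; the CleaningKit fields and the placement helper are illustrative assumptions based on the rules described above (a kit ID comprising node, disc, and object, and a kit not placed on the unavailable bstore's node or the source bstore's node).

```python
from dataclasses import dataclass, field
from typing import Iterable, List, Tuple

@dataclass
class CleaningKit:
    """Data needed to bring an unavailable bstore back to a known state."""
    node_id: int     # node on which the cleaning kit itself is stored
    disc_id: int     # hardware storage device holding the kit
    object_id: int   # object locating the kit on that device
    # In-process transactions to roll forward: (transaction ID, offset, data).
    pending_writes: List[Tuple[int, int, bytes]] = field(default_factory=list)

def choose_kit_location(
    candidates: Iterable[Tuple[int, int]],  # (node_id, disc_id) pairs with free space
    unavailable_node: int,                  # node of the unavailable bstore
    source_node: int,                       # node of the remaining bstore used as source
) -> Tuple[int, int]:
    """Pick a node/disc for the cleaning kit under the placement rules above."""
    for node_id, disc_id in candidates:
        if node_id not in (unavailable_node, source_node):
            return node_id, disc_id
    raise RuntimeError("no suitable node available for the cleaning kit")
```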
Upon application of the cleaning kit, the protection is again consistent. For example, in a parity bstore, after the cleaning kit is applied, the parity stripe is again consistent. In a mirrored protection scheme, once the cleaning kit is applied, the updated bstore may be in a state where it mirrors the other bstores in the same layer and the protection is consistent.
In
Referring now to
In some embodiments, once the cleaning kit 912 is created, a new layer 1 is automatically added to pstore1 902, since only the top layer of the pstore can be written to during a transaction. This ensures that any new data can be received by the pstore during the process of data restoration through the cleaning kit. In other embodiments, once the cleaning kit 912 is created, a new top layer, e.g., layer 1, is added on demand when a new write request is received for that particular pstore. The new layer 1 can include at least two new bstores, B5 905 and B7 910, and corresponding bstore IDs in the pb-map 900. In some embodiments, at least one of the bstores, e.g., B5 or B7, is on the same node and hardware storage device as one of the remaining bstores in the next underlying layer. For example, bstore B5 908 in layer 1 and bstore B6 in layer 2 are both stored on Node 2, Disc 3. All new writes to pstore1 902 are then written to the new bstores in the new layer 1. The information in the previous layer 1 is then logically stored in layer 2, as shown in
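In terms of the earlier PBMap/Layer sketch, pushing the old layer 1 down and adding a fresh writable top layer when a bstore becomes unavailable could look like the following; allocate_bstore is a hypothetical helper that creates a new bstore and registers its location in the pb-map.

```python
def add_new_top_layer(pb_map, pstore_id, allocate_bstore, mirror_count=2):
    """Insert a new writable layer 1 above the existing layers of a pstore.

    `allocate_bstore()` is a hypothetical helper that returns a fresh
    bstore ID and records its node/disc location in the pb-map. The
    previous layer 1 is pushed down to become layer 2, so all new writes
    land only in the new bstores while the cleaning kit and the copy of
    the remaining bstore proceed in the layer below.
    """
    entry = pb_map.pstores[pstore_id]
    new_layer = Layer(bstore_ids=[allocate_bstore() for _ in range(mirror_count)])
    entry.layers.insert(0, new_layer)   # old layer 1 becomes layer 2
    return new_layer
```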
As illustrated in
In some embodiments, the unavailable bstore B10 904 becomes available once again before the copy 914 of the remaining bstore B6 is complete. As shown in
As shown in
In
In
Once the new bstores, B47 and B48, have been created and populated with the merged data, new corresponding logical addresses, or bstore IDs, may be allocated to the new bstores and added to the pb-map in a single layer referencing those bstore IDs. The other bstores, e.g., B35, B10, and B7, and the lower layers are then removed from the pb-map 1110, as shown in
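A sketch of the pb-map update at the end of such a merge, again reusing the PBMap/Layer classes from the earlier sketch; the copying of merged data into the new bstores is elided, and the helper name is an assumption.

```python
def collapse_pstore_layers(pb_map, pstore_id, merged_bstore_ids):
    """Replace all layers of a pstore with a single layer of merged bstores.

    `merged_bstore_ids` identify new bstores that have already been created
    and populated with the data merged out of the pstore's existing layers.
    Installing the single new layer drops the references to the old bstores
    and the lower layers from the pb-map.
    """
    pb_map.pstores[pstore_id].layers = [Layer(bstore_ids=list(merged_bstore_ids))]
```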
As shown in
In
In
Next, in
In
Referring now to
In
In the examples above, new bstores are created in which to merge data. However, this is merely illustrative and not intended to be limiting. Other variations may comprise merging data from a lower layer into an upper layer and reassigning the bstore ID offset in the upper layer to point to the new bstore rather than allocating a new bstore ID.
While only a few embodiments of the present invention have been shown and described, it will be obvious to those skilled in the art that many changes and modifications may be made thereto without departing from the spirit and scope of the present disclosure as described in the following claims. All patent applications and patents, both foreign and domestic, and all other publications referenced herein are incorporated herein in their entireties to the full extent permitted by law.
From the foregoing, it will be appreciated that specific embodiments of the invention have been described herein for purposes of illustration, but that various modifications may be made without deviating from the scope of the invention. Accordingly, the invention is not limited except as by the appended claims.
This Utility patent application is a Continuation of U.S. patent application Ser. No. 14/658,015, filed on Mar. 13, 2015, now U.S. Pat. No. 10,095,708, issued on Oct. 9, 2018, which is based on previously filed U.S. Provisional Patent Application Nos. 61/982,926 and 61/982,931, both filed on Apr. 23, 2014, the benefit of the filing dates of which is claimed under 35 U.S.C. § 120 and § 119(e), and the contents of each of which are incorporated herein in their entirety by reference.