This disclosure relates generally to systems and methods for data replication, and more specifically to replication infrastructures that include mirrored data sites and provide a single consistent view of the file system from any site.
The approaches described in this section could be pursued but are not necessarily approaches that have been previously conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.
In computing systems, remote replication is a form of data protection that involves copying data between multiple sites to improve data protection and fault tolerance and to provide disaster recovery. As used herein, the term “site” may refer to physically distinct geographic locations, or it may refer to distinct groupings that must be treated independently for failure handling. For example, protection from earthquakes could mean placing replicas in sites that are not affected by the same fault lines. If the protection is directed against power-related failures, the two sites may be in the same building or perhaps even in the same rack, but each site would have a different power source.
Procedures used for data protection with a single site and procedures used for replication between different sites may differ substantially. Therefore, in conventional systems, two entirely different methodologies may be used.
Furthermore, replication of data objects between sites may suffer from various network and node failures. Such failures need to be detected and recovered from.
This summary is provided to introduce a selection of concepts in a simplified form that are further described below in the Detailed Description. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.
In accordance with various embodiments of the disclosure, a method for replication between mirroring sites is provided. In some embodiments, the method may include replicating a content addressable object store between multiple sites, where each object is addressable by a signature that is derived from the object data. In some embodiments, the method may include replicating a file system that is constructed on top of a content addressable object store.
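For illustration only, a minimal Python sketch of such a content addressable object store follows, assuming SHA-256 as the signature function (the disclosure does not prescribe a particular hash, and all names here are hypothetical):

```python
import hashlib

class ObjectStore:
    """Content addressable store: each object is keyed by a signature
    derived purely from its data, so any site derives the same address
    for the same content."""

    def __init__(self):
        self._objects = {}  # signature -> object data

    @staticmethod
    def signature(data: bytes) -> str:
        # The signature is a pure function of the object data.
        return hashlib.sha256(data).hexdigest()

    def put(self, data: bytes) -> str:
        sig = self.signature(data)
        self._objects[sig] = data
        return sig

    def get(self, sig: str) -> bytes:
        return self._objects[sig]
```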
The mirroring may be bi-directional such that changes to the data at any site are copied to all other sites, and all sites may simultaneously access the data according to the procedures available for the file system.
Additionally, the method may include preserving, across the set of replicated sites, the read and write ordering and locking guarantees that the file system is required to deliver to its clients. In some embodiments, these ordering rules may be defined by various standards or protocols. Examples of such standards and protocols include POSIX, NFS, CIFS, SMB, RESTful APIs, WebDAV, and so forth.
Each mirrored site may include one or more nodes, one of which may be elected as a gateway. In some embodiments, gateway nodes may cooperate to elect one site as an arbitrator. Alternatively, the gateways may cooperatively share this responsibility. The arbitrator guarantees that all file system ordering rules are adhered to.
Sites may be added to a mirror or removed from it. An added site may already contain data objects. In some embodiments, these data objects may be replicated using an “initial synchronization” method. The initial synchronization method may be also used whenever sites are reconnected after a disconnection (for example, due to a network failure).
Using the method described herein, data objects of a mirrored site may be accessed, created, or modified by other sites while the mirrored site is disconnected. For this purpose, a data object may be received at one site. The data object may then be stored on one or more nodes at that site and forwarded to the gateway. For some data objects, the gateway may synchronously replicate the data object and its metadata to the mirrored sites. For other data objects, the gateway may synchronously send only the data object signature and object metadata to the mirrored sites, while the data object itself is queued for asynchronous transmission. In some embodiments, the metadata may include a site identifier.
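For illustration, a minimal sketch of this write path is given below, assuming one gateway per site and simple in-memory structures for the object store, the promised-object records, and the asynchronous delivery queue (all names are hypothetical):

```python
import hashlib
from collections import deque

class SiteGateway:
    """Write-path sketch: metadata travels synchronously, while the
    object data is queued and shipped asynchronously."""

    def __init__(self, site_id):
        self.site_id = site_id
        self.objects = {}       # signature -> object data held at this site
        self.promises = {}      # signature -> metadata of objects not yet received
        self.peers = []         # gateways of the mirrored sites
        self.pending = deque()  # signatures queued for asynchronous transmission

    def write(self, data: bytes) -> str:
        sig = hashlib.sha256(data).hexdigest()
        self.objects[sig] = data                        # persist locally first
        meta = {"signature": sig, "site": self.site_id}
        for peer in self.peers:
            peer.receive_metadata(meta)                 # synchronous metadata step
        self.pending.append(sig)                        # data follows asynchronously
        return sig

    def receive_metadata(self, meta):
        # Record the promise that the object body will eventually arrive.
        self.promises[meta["signature"]] = meta

    def drain_once(self):
        # Ship one queued object body to every mirrored site.
        if self.pending:
            sig = self.pending.popleft()
            for peer in self.peers:
                peer.receive_object(sig, self.objects[sig])

    def receive_object(self, sig, data):
        self.objects[sig] = data
        self.promises.pop(sig, None)                    # promise fulfilled
```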
When a client at a mirrored site requires access to a data object that has not yet been replicated, that site sends a request for the data object to the site identified by the previously transmitted object metadata. The identified site may then send the object data to the requesting site and remove it from the queue of objects pending transmission.
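Continuing the sketch above, the on-demand pull might look as follows, where `origin` is the gateway of the site recorded in the object metadata (again purely hypothetical):

```python
def read_with_pull(site: "SiteGateway", origin: "SiteGateway", sig: str) -> bytes:
    """If the object is still only promised, ask the originating site
    to send it ahead of its asynchronous queue."""
    if sig in site.objects:
        return site.objects[sig]             # already replicated locally
    if sig in site.promises:
        origin.pending.remove(sig)           # de-queue: it is served right now
        site.receive_object(sig, origin.objects[sig])
        return site.objects[sig]
    raise KeyError(sig)                      # neither present nor promised
```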
In some embodiments, the queue of objects to be asynchronously replicated may be restricted to a maximum count of data objects. Thus, when the queue has reached its maximum, the site may stop accepting new data objects.
To the accomplishment of the foregoing and related ends, the one or more embodiments of the disclosure may comprise the features hereinafter fully described and particularly pointed out in the claims. The following description and the drawings set forth in detail certain illustrative features of the one or more embodiments of the disclosure. These features are indicative, however, of but a few of the various ways in which the principles of various embodiments may be employed, and this description is intended to include all such embodiments and their equivalents.
Embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations in accordance with exemplary embodiments. These exemplary embodiments, which are also referred to herein as “examples,” are described in enough detail to enable those skilled in the art to practice the present subject matter. The embodiments can be combined, other embodiments can be utilized, or structural, logical and electrical changes can be made without departing from the scope of what is claimed. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents.
The approaches and principles disclosed herein relate to computer-implemented methods and systems for replication of data objects within a computer network infrastructure. The computer network infrastructure includes a plurality of nodes, each having storage resources for storing various data objects and enabling access to them from other nodes. The storage resources of a single node may include one or a plurality of memory devices such as RAM (random-access memory), ROM (read-only memory), hard disk drives (HDDs), or solid state drives (SSDs). Each data object (e.g., a file, a collection of files, or any other entity which can be manipulated by an operating system or an application) may be replicated to each of the nodes in the infrastructure.
Logical connections between devices may form various topologies. The topology may depend on the layout of the physical network, the number of devices in the network, and other factors. One conventional topology includes a site where data nodes are connected in a circle, with each data node connected to its neighboring data nodes. Such a site can be referred to as a “ring.” Another conventional topology is a “mesh” topology, where every data node is connected to every other data node in the site. Rings may be connected in a mesh, using point-to-point communications channels between each pair of nodes. Data nodes are said to be connected in a mesh when two or more rings are connected together and every pair of nodes drawn from the connected rings is joined by a point-to-point communications channel. Thus, if a ring contains 3 nodes, A, B, and C, there will be 3 network connections: A-B, B-C, and C-A. If a mesh has one member ring containing 2 nodes, A and B, and another containing 3 nodes, D, E, and F, there will be the following 10 network connections: A-B, D-E, E-F, F-D, A-D, A-E, A-F, B-D, B-E, and B-F.
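For illustration, the connection counts above can be reproduced with a short sketch, assuming within-ring links follow the circle pattern (which coincides with full connectivity for rings of up to three nodes, as in the examples):

```python
from itertools import combinations

def ring_connections(ring):
    """A ring connects each node to its neighbors in a circle."""
    n = len(ring)
    if n < 2:
        return []
    if n == 2:
        return [(ring[0], ring[1])]
    return [(ring[i], ring[(i + 1) % n]) for i in range(n)]

def mesh_connections(rings):
    """A mesh joins every pair of nodes drawn from different rings with
    a point-to-point channel, in addition to each ring's own links."""
    links = [c for ring in rings for c in ring_connections(ring)]
    for r1, r2 in combinations(rings, 2):
        links += [(a, b) for a in r1 for b in r2]
    return links

# The example from the text: a 2-node ring and a 3-node ring yield
# the 10 connections listed above.
print(mesh_connections([["A", "B"], ["D", "E", "F"]]))
```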
In more complex designs, both of the described topologies may be used. For example, data sites may communicate with each other using a mesh connection. With this connection, every node in a data site can replicate data to every node in another data site. Thus, a mesh topology can exist between the data sites. This topology is illustrated by
As shown in
Site A may be associated with a configuration space 140, while site B may be associated with a configuration space 160. Configuration spaces 140 and 160 of the sites A and B in the mirror may be coordinated by a system for replication between data sites 150. The system 150 may coordinate configuration of sites A and B connected in the mesh to provide a single common view of a file system.
Replication of data between mirroring data sites may employ both synchronous and asynchronous data paths. Synchronous data paths may be used for metadata communication, while asynchronous paths may be used to transmit data itself. This approach is illustrated in
In a multi-site infrastructure, one site may control replication and operate a Metadata Operational Processor (MOP). This site may be referred to as a master site and may be used to control metadata and resolve conflicts. Other sites may each operate a MOP proxy. These sites may be referred to as subservient sites.
A site B not hosting MOP 220 may run a MOP proxy 230. The MOP proxy 230 may receive requests from nodes of the site B, just as MOP 220 receives requests from site A. However, by acting as a proxy, MOP proxy 230 may relay requests to MOP 220, in site A, and relay responses back to the nodes initiating the request. The MOP proxy 230 may act as a forwarding agent and relay remote procedure calls (RPC) between nodes of site B and the node running the MOP 220 in the site A.
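For illustration, a minimal sketch of this relay arrangement follows, with the RPC transport abstracted away (hypothetical names):

```python
class MOP:
    """Metadata Operational Processor at the master site (sketch)."""
    def handle(self, request: dict) -> dict:
        # Resolve metadata updates and conflicts centrally.
        return {"ok": True, "handled_by": "master", "request": request}

class MOPProxy:
    """Runs at a subservient site: relays each request to the master's
    MOP and relays the response back, making no local decisions."""
    def __init__(self, master: MOP):
        self.master = master

    def handle(self, request: dict) -> dict:
        return self.master.handle(request)   # pure forwarding agent
```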
Site A and site B may be connected using a bidirectional connection between the nodes in each site. This connection may be called a main gateway 210. The main gateway 210 may leverage a distributed messaging protocol for connection and events. The main gateway 210 may operate over a Local Area Network (LAN) or a Wide Area Network (WAN).
Referring now to
This may accommodate the node failover scenario in which node 2A, hosting the MOP 220 (or MOP proxy 230), fails over to another node in the site. The gateway service may follow the MOP 220 in a node failover. This may be performed using a pre-provisioned path to establish an alternate gateway between the mirrored sites. The state of the connections may be used to limit the possible MOP and gateway failover locations.
The MOP proxy 230 associated with node 2B may migrate to another node of site B (for example, node 4B). This may be a result of a failure of the main gateway 210. Because the main gateway 210 and the MOP services are co-located, the main gateway 210 may also migrate to node 4B.
Connection States
Thus, the connection states of a site may include awaiting connection 410, which may be initiated by nodes in another site. When the connection is established, synchronizing 420 between the sites may start. The synchronizing may continue until either a synchronization error occurs 430 or the synchronization finishes 440. When either state 430 or 440 occurs, the connection between the sites no longer exists, and the site may return to state 410, attempt to restore the connection, and continue synchronizing 420 until the synchronization is finished 440.
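For illustration, these states and the transitions implied above can be summarized in a short sketch (the numbers follow the reference numerals used in the text):

```python
from enum import Enum, auto

class ConnState(Enum):
    AWAITING_CONNECTION = auto()  # 410
    SYNCHRONIZING = auto()        # 420
    SYNC_ERROR = auto()           # 430
    SYNC_FINISHED = auto()        # 440

# After an error or a finished synchronization the connection no longer
# exists, so the site returns to waiting and may synchronize again.
TRANSITIONS = {
    ConnState.AWAITING_CONNECTION: {ConnState.SYNCHRONIZING},
    ConnState.SYNCHRONIZING: {ConnState.SYNC_ERROR, ConnState.SYNC_FINISHED},
    ConnState.SYNC_ERROR: {ConnState.AWAITING_CONNECTION},
    ConnState.SYNC_FINISHED: {ConnState.AWAITING_CONNECTION},
}
```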
Initial Synchronization
When sites connect or reconnect, the gateway service enters a phase called initial synchronization. The gateway services in each site may exchange the object identifiers of the objects known to exist on their respective sites. Objects whose identifiers are unknown to a site may then be pulled by that site's gateway service using a data receive operation and written to the site using a data transfer operation. These operations may allow the gateway service to perform the initial synchronization of objects with more efficient use of the network link.
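For illustration, the identifier exchange reduces to a set difference; in the sketch below, `pull` stands in for the data receive operation (hypothetical names):

```python
def initial_sync(local_ids: set, remote_ids: set, pull) -> set:
    """Pull only the objects this site is missing after the two sites
    have exchanged the identifiers of the objects they hold."""
    missing = set(remote_ids) - set(local_ids)
    for object_id in missing:
        pull(object_id)   # data receive, followed by a local data transfer
    return missing
```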
In some embodiments, status keys related to initial synchronization may be published in the configuration space, since initial synchronization is a long-running operation. In this way, the progress of the initial synchronization may be monitored.
When a mirror is connected, file system updates made on one site may be relayed to the other site. At the object level, this may be achieved by echoing updates made locally in one site to the other site.
Tier Architecture
Data objects may be replicated between nodes within a site and between the sites. Intra-site and inter-site operations may be performed at different levels or tiers as shown by
Updates may be persisted on site A locally via tier 0 510 (or local tier). Correspondingly, intra-site operations, such as operations between nodes 1A and 2A, 1A and 3A, 3A and 4A, and 2A and 4A, may be performed at tier 0 510.
Updates may then be pushed to site B via tier 1 520 (or remote tier). Operations within site B, i.e., between nodes 1B and 2B, 1B and 3B, 3B and 4B, and 2B and 4B, may then also be performed at tier 0 510.
By associating intra-site and inter-site operations with different tiers, looping of operations may be avoided. For example, a data object write may be replicated to all of the tiers in a list at the originating site. One of those tiers may contain a gateway to another site, which causes the data object to be replicated to that site. Within that site, a new list of tiers to store the data object may be generated, and the originating tier may be eliminated from the list in order to avoid the gateway on this end looping the data object back to the originating end.
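For illustration, this loop-avoidance rule can be sketched with a local tier and gateway links per site; the tier the object arrived through is excluded from further replication (hypothetical names):

```python
class Site:
    """Each site replicates a write to its local tier and to its
    gateways, skipping the gateway that points back at the site the
    object arrived from."""

    def __init__(self, name: str):
        self.name = name
        self.local = []      # tier 0: local storage
        self.gateways = []   # tier 1: peer sites reachable via a gateway

    def write(self, data: bytes, arrived_from: "Site" = None):
        self.local.append(data)              # intra-site replication (tier 0)
        for peer in self.gateways:           # inter-site replication (tier 1)
            if peer is not arrived_from:     # originating tier eliminated
                peer.write(data, arrived_from=self)

# A write at site A reaches site B exactly once and does not loop back.
a, b = Site("A"), Site("B")
a.gateways.append(b)
b.gateways.append(a)
a.write(b"object-data")
assert a.local == [b"object-data"] and b.local == [b"object-data"]
```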
In accordance with some embodiments, before a generic update operation completes successfully, it must successfully complete on all tiers.
Since the mirror connection may have slow WAN-like performance with high latencies, an update between sites may start with forming a data reference informing the other site about data objects that will be transferred. For example, an RPC may be handled by the gateway service on the other site, at which point an entry may be made in a proxy object database, called DB_MB. Such an entry may indicate a promise that an object corresponding to this entry will eventually arrive. The object transfer may then be queued on the node initiating the request for eventual delivery to the other site.
In some example embodiments, a configurable maximum queue length may be enforced. When the limit is reached, an alarm may be triggered, and the original operation will not complete until the queue can accept new entries.
In other example embodiments, when the queue limit is reached, backpressure may be applied to the clients, such that new writes are not accepted. Via this and similar schemes, the differences between the mirrors may be minimized and bounded in time.
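For illustration, a bounded delivery queue with backpressure might look like the following sketch (hypothetical names; the alarm and retry policy are left to the caller):

```python
from collections import deque

class DeliveryQueue:
    """Asynchronous delivery queue with a configurable maximum length;
    refusing new entries at the limit bounds how far the mirrors can
    drift apart."""

    def __init__(self, max_len: int):
        self.max_len = max_len
        self.items = deque()

    def enqueue(self, signature: str) -> bool:
        if len(self.items) >= self.max_len:
            return False          # backpressure: the write is not accepted yet
        self.items.append(signature)
        return True
```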
To support a read, an object receive may be performed only after the object is not found in any previous tier. In order to support a consistent file system view, if the requested object corresponds to an entry in the local proxy object database, a tier 0 read operation may be performed in the other site via the gateway service node in each site. Meanwhile, the corresponding data object may not be persisted in the reading site until the write operation is eventually de-queued and processed successfully. Upon successful completion, the corresponding entry in the DB_MB is removed. It is possible that the write operation will fail when the mirror is compromised; this is discussed in more detail below.
The length of this write queue, when combined with the length of the read queue for initial synchronization, may provide information concerning synchronization of two sites. These lengths may be periodically recorded in a mesh status key in the configuration space.
Asynchronous Data Object Write Operations
When write operations within a site, for example, site A as shown by
On receiving the proxy object info 640, the proxy reference database 630 of site B may be updated. Thus, proxy references may be created informing site B about the data objects 620 that will be transferred.
When the data delivery queue 610 allows, the data objects 620 may be compressed and sent to site B. The data objects 620 may be transferred asynchronously. After the data objects 620 are received at site B, they may be decompressed and written to the nodes of site B.
After the update operation in all tiers is successfully completed, the proxy object info 640 in proxy reference database 630 may be removed.
Data Object Read Operations
When a read operation for a data object is initiated, the data object may be searched for at tier 0 510. If the data object is found at tier 0 510, the read operation completes successfully. However, if the data object has not yet been written to the site, it will not be found, and the read operation may be retried at tier 1. This process may repeat until either the object is found or all tiers are exhausted. In the latter case, the object cannot be found, so an error is returned.
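For illustration, this tiered read fallback reduces to the following sketch, modeling each tier as a simple mapping (hypothetical names):

```python
def read(object_id: str, tiers: list) -> bytes:
    """Try tier 0 first, then fall back tier by tier; tier 1 reaches
    the mirrored site through the gateway. If every tier is exhausted,
    the object does not exist anywhere."""
    for tier in tiers:            # tiers ordered: tier 0, tier 1, ...
        if object_id in tier:
            return tier[object_id]
    raise KeyError(object_id)     # not found in any tier: return an error

# An object present only at the remote tier is found on the second try.
tier0, tier1 = {}, {"abc123": b"payload"}
assert read("abc123", [tier0, tier1]) == b"payload"
```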
Referring to
In some embodiments, data objects may be associated with object identifiers. An object identifier may uniquely identify a data object based on the content of the data object. Thus, a data object may be found in any location, at any tier using the object identifier, despite replication policies local to a site that involve dynamically relocating data replicas within a site.
Site Failure Scenarios
In some cases, one of the sites may experience a failure due to various reasons (for example, a power outage). If a remote site fails, a new remote site may be provisioned, and after establishing a connection, an initial synchronization may be initiated. If the site hosting the MOP fails, the system for remote replication may designate a new site to host the MOP.
If site A fails while proxy reference database 730 still holds proxies, the system for replication between data sites may perform a rollback. The system may scan the database and roll back to a snapshot consistent with the data that was written in site B.
Other actions may include emptying data delivery queue 710, removing proxies from proxy reference database 730, and so forth.
If site A fails and subsequently recovers (e.g., recovering from a temporary power outage), it may be demoted to a site running a MOP proxy. When the connection between the sites is established, an initial synchronization procedure may be initiated. Thus, access to data residing on site A will not be lost.
It will be appreciated by one of ordinary skill in the art that examples of the foregoing modules may be virtual, and instructions said to be executed by a module may, in fact, be retrieved and executed by the system 900. Although various elements may be configured to perform some or all of the various operations described herein, fewer or more elements may be provided and still fall within the scope of various embodiments.
As shown in
In some embodiments, an object identifier may be generated by running a cryptographic hash function over the content associated with the data object. Thereafter, the data object may be found based on the content associated with it.
At operation 840, the data object reference may be transmitted to one or more other mirrored data sites, each including one or more nodes. Each of these nodes may be interconnected with each node in the other mirrored data sites to form a complete mesh. In some example embodiments, the data object reference may be transmitted to a data object reference database associated with the other mirrored data site. The data object may then be queued for transmission to the other mirrored data site at operation 850.
Upon transmission of the data object to the other mirrored data site, the data object may be replicated to one or more nodes of that data site. After completion of replication of the data object to the mirrored data site, the data object reference may be discarded.
In some embodiments, replication of the data object to the nodes within a mirrored data site may be performed at an intra-site operation tier, whereas transmitting the data object reference and the data object between mirrored data sites may be performed at an inter-site operation tier. Operations at both operation tiers may be performed using the same data logic.
Additionally, the method 800 may optionally comprise synchronizing data between mirrored data sites. The synchronizing may include comparing the data object references and data objects associated with the mirrored data site to the data object references and data objects associated with one or more of the other mirrored data sites. Delivery of the data objects corresponding to the object references may then be requested.
In some embodiments, the method 800 may optionally comprise receiving a request for the data object at the other mirrored data site. Whether that site does not yet have the requested data object may be determined from the data object reference held at the other mirrored data site. In this case, the originating mirrored data site may be requested to serve the data object at a higher priority.
The data object may then be queued for transmission to one or more of the other mirrored data sites. Upon transmission of the data object to the one or more of the other mirrored data sites, the data object may be replicated to the nodes of those data sites, and the data object reference may be discarded.
The example computer system 1000 includes a processor or multiple processors 1002, a hard disk drive 1004, a main memory 1006, and a static memory 1008, which communicate with each other via a bus 1010. The computer system 1000 may also include a network interface device 1012 and coprocessors dedicated to data compression and to cryptographic calculation of object identifiers. The hard disk drive 1004 may include a computer-readable medium 1020, which stores one or more sets of instructions 1022 embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 1022 can also reside, completely or at least partially, within the main memory 1006 and/or within the processors 1002 during execution thereof by the computer system 1000. The main memory 1006 and the processors 1002 also constitute machine-readable media such as, for example, an HDD or SSD.
While the computer-readable medium 1020 is shown in an exemplary embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media. Such media can also include, without limitation, hard disks, floppy disks, NAND or NOR flash memory, digital video disks, RAM, ROM, HDD, SSD, and the like.
The exemplary embodiments described herein can be implemented in an operating environment comprising computer-executable instructions (e.g., software) installed on a computer, in hardware, or in a combination of software and hardware. The computer-executable instructions can be written in a computer programming language or can be embodied in firmware logic. If written in a programming language conforming to a recognized standard, such instructions can be executed on a variety of hardware platforms and for interfaces to a variety of operating systems. Although not limited thereto, computer software programs for implementing the present method can be written in any number of suitable programming languages such as, for example, C, C++, or C#, or with other compilers, assemblers, interpreters, or other computer languages or platforms.
Thus, computer-implemented methods and systems for replication of data between mirrored data sites are described. Although embodiments have been described with reference to specific exemplary embodiments, it will be evident that various modifications and changes can be made to these exemplary embodiments without departing from the broader spirit and scope of the present application. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.