The present disclosure relates to a method for writing data in a data storage system comprising a plurality of data storage nodes, the method being employed in a server in the data storage system. The disclosure further relates to a server capable of carrying out the method.
Such a method is disclosed, for example, in US 2005/0246393 A1, which describes a system that uses a plurality of storage centres at geographically disparate locations. Distributed object storage managers are included to maintain information regarding stored data.
One problem associated with such a system is how to accomplish writing, as well as maintenance, of data in a way that is simple and yet robust and reliable.
One object of the present disclosure is therefore to realise robust writing of data in a distributed storage system.
This object is achieved by means of a method for writing data to a data storage system of the initially mentioned kind, the method being carried out in a server running an application which accesses data in the data storage system. The method comprises: sending a multicast storage query to a plurality of storage nodes; receiving a plurality of responses from a subset of said storage nodes, the responses including storage node information relating to each respective storage node; and selecting at least two storage nodes in the subset based on said responses. The selecting includes determining, based on an algorithm, for each storage node in the subset, a probability factor which is based on its storage node information, and randomly selecting said at least two storage nodes, wherein the probability of a storage node being selected depends on its probability factor. The method further involves sending the data, and a data identifier corresponding to the data, to the selected storage nodes.
This method accomplishes robust writing of data: even though storage nodes are selected depending on their temporary aptitude, the information will still be spread over the system to a certain extent, even within a short time frame. Maintenance of the storage system therefore becomes less demanding, since the correlation between which storage nodes carry the same information is reduced to some extent. A replication process carried out when a storage node malfunctions can consequently involve a greater number of other storage nodes, and thus complete much more quickly. Additionally, the risk of overloading highly ranked storage nodes during intensive writing operations is reduced, as more storage nodes are used for writing and fewer are idle.
The storage node information may include geographic data relating to the geographic position of each storage node, such as the latitude, longitude and altitude thereof. This allows the server to spread the information geographically, within a room, a building, a country, or even the world.
The random selection of storage nodes may be carried out for those storage nodes in the subset that fulfil a primary criterion based on geographic separation, as geographic separation is an important feature for redundancy.
The storage node information may include system age and/or system load for the storage node in question.
The multicast storage query may include a data identifier, identifying the data to be stored.
At least three nodes may be selected, and a list of storage nodes successfully storing the data may be sent to the selected storage nodes.
The random selection of storage nodes may be carried out for a fraction of the nodes in the subset, which fraction includes the storage nodes with the highest probability factors. The least suitable storage nodes are thereby excluded, providing a selection of more reliable storage nodes while maintaining the random distribution of the information to be written.
The disclosure further relates to a server, for carrying out writing of data, corresponding to the method. The server then generally comprises means for carrying out the actions of the method.
The present disclosure is related to a distributed data storage system comprising a plurality of storage nodes. The structure of the system and the context in which it is used are outlined in
A user computer 1 accesses, via the Internet 3, an application 5 running on a server 7. The user context, as illustrated here, is therefore a regular client-server configuration, which is well known per se. However, it should be noted that the data storage system to be disclosed may be useful also in other configurations.
In the illustrated case, two applications 5, 9 run on the server 7. This number of applications may, of course, be different. Each application has an API (Application Programming Interface) 11 which provides an interface in relation to the distributed data storage system 13 and supports requests, typically write and read requests, from the applications running on the server. From an application's point of view, reading or writing information from/to the data storage system 13 need not appear different from using any other type of storage solution, for instance a file server or simply a hard drive.
Each API 11 communicates with storage nodes 15 in the data storage system 13, and the storage nodes communicate with each other. These communications are based on TCP (Transmission Control Protocol) and UDP (User Datagram Protocol). These concepts are well known to the skilled person, and are not explained further.
It should be noted that different APIs 11 on the same server 7 may access different sets of storage nodes 15. It should further be noted that there may exist more than one server 7 which accesses each storage node 15. This does not, however, significantly affect the way in which the storage nodes operate, as will be described later.
The components of the distributed data storage system are the storage nodes 15 and the APIs 11 in the server 7 which access the storage nodes 15. The present disclosure therefore relates to methods carried out in the server 7 and in the storage nodes 15. Those methods will primarily be embodied as software implementations which run on the server and the storage nodes, respectively, and which together determine the operation and the properties of the overall distributed data storage system.
The storage node 15 may typically be embodied by a file server which is provided with a number of functional blocks. The storage node may thus comprise a storage medium 17, which typically comprises a number of hard drives, optionally configured as a RAID (Redundant Array of Independent Disks) system. Other types of storage media are however conceivable as well.
The storage node 15 may further include a directory 19, which comprises lists of data entity/storage node relations in the form of a host list, as will be discussed later.
In addition to the host list, each storage node further contains a node list including the IP addresses of all storage nodes in its set or group of storage nodes. The number of storage nodes in a group may vary from a few to hundreds of storage nodes. The node list may further have a version number.
Additionally, the storage node 15 may include a replication block 21 and a cluster monitor block 23. The replication block 21 includes a storage node API 25 and is configured to execute functions for identifying the need for, and carrying out, a replication process, as will be described in detail later. The storage node API 25 of the replication block 21 may contain code that largely corresponds to the code of the server's 7 storage node API 11, since the replication process comprises actions that largely correspond to the actions carried out by the server 7 during the reading and writing operations to be described. For instance, the writing operation carried out during replication largely corresponds to the writing operation carried out by the server 7. The cluster monitor block 23 is configured to carry out monitoring of other storage nodes in the data storage system 13, as will be described in more detail later.
The storage nodes 15 of the distributed data storage system can be considered to exist in the same hierarchical level. There is no need to appoint any master storage node that is responsible for maintaining a directory of stored data entities, monitoring data consistency, etc. Instead, all storage nodes 15 can be considered equal, and may, at times, carry out data management operations vis-à-vis other storage nodes in the system. This equality ensures that the system is robust. In case of a storage node malfunction, other nodes in the system will cover for the malfunctioning node and ensure reliable data storage.
The operation of the system will be described in the following order: reading of data, writing of data, and data maintenance. Even though these methods work very well together, it should be noted that they may in principle also be carried out independently of each other. That is, for instance the data reading method may provide excellent properties even if the data writing method of the present disclosure is not used, and vice versa.
The reading method is now described with reference to
The reading, as well as other functions in the system, utilise multicast communication to communicate simultaneously with a plurality of storage nodes. By a multicast or IP multicast is here meant a point-to-multipoint communication which is accomplished by sending a message to an IP address which is reserved for multicast applications.
In principle, only one server may be registered as a subscriber to a multicast address, in which case a point-to-point communication is achieved. However, in the context of this disclosure, such a communication is nevertheless considered a multicast communication since a multicast scheme is employed.
Unicast communication, i.e. communication with a single specific recipient, is also employed.
With reference to
The storage nodes scan themselves for data corresponding to the identifier. If such data is found, a storage node sends a response, which is received 33 by the server 7, cf.
Based on the responses, the server selects 35 one or more storage nodes from which data is to be retrieved, and sends 37 a unicast request for data to that/those storage nodes, cf.
In response to the request for data, the storage node/nodes send the relevant data by unicast to the server, which receives 39 the data. In the illustrated case, only one storage node is selected. While this is sufficient, it is possible to select more than one storage node in order to receive two sets of data, which makes a consistency check possible. If the transfer of data fails, the server may select another storage node for retrieval.
The selection of storage nodes may be based on an algorithm that takes several factors into account in order to achieve a good overall system performance. Typically, the storage node having the latest data version and the lowest load will be selected, although other concepts are fully conceivable.
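Purely as an illustration of this selection criterion, the following sketch picks a node from the read responses by preferring the latest version and, among those, the lowest load. The response fields ("version", "load") and the function name are assumptions for the example only; the disclosure does not prescribe any particular data format.

```python
def select_read_node(responses):
    """Pick a storage node to read from.

    responses: list of dicts such as
        {"node": "192.168.1.5", "version": 7, "load": 0.3}
    Prefers the latest data version; breaks ties by choosing the lowest load.
    """
    latest = max(r["version"] for r in responses)
    candidates = [r for r in responses if r["version"] == latest]
    return min(candidates, key=lambda r: r["load"])["node"]
```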
Optionally, the operation may be concluded by the server sending a list to all storage nodes involved, indicating which nodes contain the data and with which version. Based on this information, the storage nodes may themselves maintain the data properly by means of the replication process to be described.
With reference to
In any case, at least a subset of the storage nodes will provide responses by unicast transmission to the server 7. Typically, storage nodes having a predetermined minimum amount of free disk space will answer the query. The server 7 receives 43 the responses, which comprise storage node information relating to properties of each storage node, such as geographic data relating to the geographic position of each storage node. For instance, as indicated in
Alternatively, or in addition to the geographic data, further information related to storage node properties may be provided that serves as an input to a storage node selection process. In the illustrated example, the amount of free space in each storage node is provided together with an indication of the storage node's system age and an indication of the load that the storage node currently experiences.
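As a purely illustrative sketch of the kind of storage node information mentioned above, a response could carry fields along the following lines. The field names and units are assumptions for the example; the disclosure only requires that the responses convey properties such as free space, system age, load and geographic position.

```python
from dataclasses import dataclass

@dataclass
class StorageNodeInfo:
    """Example of storage node properties returned in a storage query response."""
    address: str           # e.g. "192.168.1.5"
    free_space_gb: float   # amount of free disk space
    system_age_days: int   # how long the node has been in service
    load: float            # current load, e.g. 0.0 (idle) to 1.0 (saturated)
    latitude: float        # geographic position
    longitude: float
    altitude: float
```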
Based on the received responses, the server selects 45 at least two, in a typical embodiment three, storage nodes in the subset for storing the data. The selection of storage nodes is carried out by means of an algorithm that takes different data into account. The selection may be carried out in order to achieve some kind of geographical diversity. At the very least, it should preferably be avoided that only file servers in the same rack are selected as storage nodes. Typically, a great geographical diversity may be achieved, even selecting storage nodes on different continents. In addition to the geographical diversity, other parameters may be included in the selection algorithm. It is advantageous to have a randomized feature in the selection process, as will be disclosed below.
Typically, the selection may begin by selecting a number of storage nodes that are sufficiently separated geographically. This may be carried out in a number of ways. There may for instance be an algorithm that identifies a number of storage node groups, or the storage nodes may have group numbers, such that one storage node in each group can easily be picked.
The selection may then include calculating, based on each node's storage node information (system age, system load, etc.), a probability factor which corresponds to a storage node aptitude score. A younger system, for instance, which is less likely to malfunction, gets a higher score. The probability factor may thus be calculated as a scalar product of two vectors, where one contains the storage node information parameters (or, as applicable, their inverses) and the other contains corresponding weighting parameters.
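A minimal sketch of such a scalar product follows, assuming three illustrative parameters (free space, system age and load, the latter two inverted so that lower values score higher) and an arbitrary set of weights; the actual parameters and weights are a design choice and are not specified by the disclosure.

```python
def probability_factor(free_space_gb, system_age_days, load,
                       weights=(1.0, 0.5, 2.0)):
    """Aptitude score as a scalar product of node parameters and weights.

    Parameters where a low value is better (age, load) are inverted, so a
    younger or less loaded node receives a higher score.
    """
    params = (
        free_space_gb,                   # more free space  -> higher score
        1.0 / (1.0 + system_age_days),   # younger system   -> higher score
        1.0 / (1.0 + load),              # lower load       -> higher score
    )
    return sum(p * w for p, w in zip(params, weights))
```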
Another factor that may be taken into account is the status of the storage node's disk or disks.
A file to be stored may be predestined for a specific disk if the storage node has more than one disk. This may be determined by the file's UUID. For instance, if a storage node has 16 hard drives numbered hexadecimally from 0 to F, the disk to be used for a specific file can be determined by the first four bits of the UUID.
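For the 16-drive example above, the mapping can be sketched as follows; this is an illustration only, and the helper name is hypothetical.

```python
import uuid

def disk_for_file(file_uuid: uuid.UUID) -> int:
    """Return the drive number (0-15, i.e. hex 0-F) given by the first four bits of the UUID."""
    return file_uuid.bytes[0] >> 4   # top nibble of the first UUID byte

# Example: a UUID beginning with 0xA7... maps to drive 10 (hex A).
print(disk_for_file(uuid.UUID("a7000000-0000-0000-0000-000000000000")))  # -> 10
```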
Thus, when a storage node receives a storage query, including the file's UUID, it can check the status of the relevant disk, and return this status in the response to the server 7.
The status may typically include the disk queue, i.e. the number of tasks that the storage node's operating system has sent to the hard drive in question and that have not yet been carried out. This factor strongly determines how quickly the write operation can be carried out.
Another disk status parameter that can be of interest is whether the hard drive in question is sleeping or not. If the disk is sleeping (i.e. does not rotate), it may be efficient to select another storage node in order to save energy.
In any case, disk parameters can be used in the storage node selection process.
The selection may then comprise randomly selecting storage nodes, where the probability of a specific storage node being selected depends on its probability factor. Typically, if a first storage node has twice as high a probability factor as a second, the first has twice as high a probability of being selected.
It is possible to remove a percentage of the storage nodes with the lowest probability factors before carrying out the random selection, such that this selection is carried out for a fraction of the nodes in the subset, which fraction includes the storage nodes with the highest probability factors. This is particularly useful if there are a lot of available storage nodes, which may make the selection algorithm computation time-consuming.
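The combination of this trimming step and the weighted random selection could, under illustrative assumptions (a 50% keep fraction and simple roulette-wheel sampling without replacement), be sketched as follows; none of these specifics are mandated by the disclosure.

```python
import random

def select_storage_nodes(nodes, factors, count=3, keep_fraction=0.5):
    """Randomly pick `count` nodes, with probability proportional to their factors.

    nodes: list of node identifiers; factors: matching probability factors.
    The lowest-scoring nodes are dropped first, keeping at least `count` nodes.
    """
    ranked = sorted(zip(nodes, factors), key=lambda nf: nf[1], reverse=True)
    pool = ranked[:max(count, int(len(ranked) * keep_fraction))]
    selected = []
    for _ in range(min(count, len(pool))):
        total = sum(f for _, f in pool)
        r = random.uniform(0.0, total)
        acc = 0.0
        for i, (node, factor) in enumerate(pool):
            acc += factor
            if r <= acc:
                selected.append(node)
                del pool[i]          # sample without replacement
                break
    return selected
```

With this scheme a node with twice the probability factor of another is twice as likely to be drawn in each round, in line with the example given above.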
Needless to say, the selection process can be carried out in a different way. For instance, it is possible to first calculate the probability factor for all storage nodes in the responding subset and carry out the randomized selection. When this is done, it may be checked that the resulting geographical diversity is sufficient, and, if it is not sufficient, repeat the selection with one of the two closest selected storage nodes excluded from the subset. Making a first selection based on geographic diversity, e.g. picking one storage node in each group for the subsequent selection based on the other parameters, is particularly useful, again, in cases where there are a lot of available storage nodes. In those cases a good selection will still be made without performing calculations with parameters of all available storage nodes.
The selection process for a file to be stored can be carried out based on responses received as the result of a multicast query carried out for that file. However, it would also be possible to instead use responses recently received as the result of a multicast query issued in relation to the storing of another file. As a further alternative, the server can regularly issue general multicast queries “what is your status” to the storage nodes, and the selection may be based on the responses then received. Thus, it may not be necessary to carry out a multicast query for every single file to be stored.
When the storage nodes have been selected, the data to be stored and a corresponding data identifier are sent to each selected node, typically using a unicast transmission.
Optionally, the operation may be concluded by each storage node which has successfully carried out the writing operation sending an acknowledgement to the server. The server then sends a list to all storage nodes involved, indicating which nodes have successfully written the data and which have not. Based on this information, the storage nodes may themselves maintain the data properly by means of the replication process to be described. For instance, if the writing failed at one storage node, the file needs to be replicated to one more storage node in order to achieve the desired number of storing storage nodes for that file.
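As a sketch of this concluding step, the server could compile the acknowledgement list along the following lines; the `send_unicast` helper is a placeholder for whatever transport is used and, like the field names, is an assumption of the example.

```python
def send_unicast(node, message):
    """Placeholder for the unicast transport; prints instead of sending."""
    print(f"-> {node}: {message}")

def conclude_write(selected_nodes, acknowledged_nodes):
    """Tell every involved node which nodes succeeded and which failed."""
    succeeded = [n for n in selected_nodes if n in acknowledged_nodes]
    failed = [n for n in selected_nodes if n not in acknowledged_nodes]
    summary = {"stored_on": succeeded, "failed_on": failed}
    for node in succeeded:
        send_unicast(node, summary)
    return summary
```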
The data writing method in itself allows an API in a server 7 to store data in a very robust way, as excellent geographic diversity may be provided.
In addition to the writing and reading operations, the API in the server 7 may carry out operations that delete files and update files. These processes will be described in connection with the data maintenance process below.
The aim of the data maintenance process is to make sure that a reasonable number of non-malfunctioning storage nodes each store the latest version of each file. Additionally, it may ensure that no deleted files remain stored at any storage node. The maintenance is carried out by the storage nodes themselves. There is thus no need for a dedicated "master" that takes responsibility for the maintenance of the data storage. This ensures improved reliability, as such a "master" would otherwise be a weak spot in the system.
With reference to
The robustness of the distributed storage relies on a reasonable number of copies of each file, of the correct version, being stored in the system. In the illustrated case, three copies of each file are stored. However, should for instance the storage node with the address 192.168.1.5 fail, the desired number of stored copies of files "B" and "C" will no longer be met.
One event that results in a need for replication is therefore the malfunctioning of a storage node in the system.
Each storage node in the system may monitor the status of other storage nodes in the system. This may be carried out by letting each storage node emit a so-called heartbeat signal at regular intervals, as illustrated in
The heartbeat signal may, in addition to the storage node's address, include its node list version number. Another storage node, listening to the heartbeat signal and finding that the transmitting storage node has a later node list version, may then request that the transmitting storage node transfer its node list. This means that the addition and removal of storage nodes can be accomplished simply by adding or removing a storage node and sending the new node list version to one single storage node. The node list will then spread to all other storage nodes in the system.
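A minimal sketch of this propagation mechanism, under the assumption that heartbeats are simple dictionaries and that the newer list is fetched through a caller-supplied callable, could look as follows.

```python
class NodeListHolder:
    """Keeps the node list and picks up newer versions from heartbeats."""

    def __init__(self, node_list, version):
        self.node_list = list(node_list)   # IP addresses of the storage node group
        self.version = version             # node list version number

    def heartbeat(self, own_address):
        """The message a storage node emits at regular intervals."""
        return {"address": own_address, "node_list_version": self.version}

    def on_heartbeat(self, message, fetch_node_list):
        """If the sender has a newer node list, fetch it and adopt it.

        fetch_node_list(address) is a placeholder for the unicast request
        to transfer the sender's node list.
        """
        if message["node_list_version"] > self.version:
            self.node_list = fetch_node_list(message["address"])
            self.version = message["node_list_version"]
```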
Again with reference to
The detection process may however also reveal other conditions that imply the need for replicating a file. Typically, such conditions may be inconsistencies, i.e. that one or more storage nodes have an obsolete version of the file. A delete operation also implies a replication process, as this process may carry out the actual physical deletion of the file. The server's delete operation then only needs to make sure that the storage nodes set a deletion flag for the file in question. Each node may therefore monitor reading and writing operations carried out in the data storage system. Information provided by the server 7 at the conclusion of reading and writing operations, respectively, may indicate that one storage node contains an obsolete version of a file (in the case of a reading operation) or that a storage node did not successfully carry out a writing operation. In both cases there exists a need for maintaining data by replication such that the overall objects of the maintenance process are fulfilled.
In addition to the basic reading and writing operations 63, 65, at least two additional processes may provide indications that a need for replication exists, namely the deleting 67 and updating 69 processes that are now given a brief explanation.
The deleting process is initiated by the server 7 (cf.
The updating process has a search function, similar to that of the deleting process, and a writing function, similar to that carried out in the writing process. The server sends a query by multicasting to all storage nodes, in order to find out which storage nodes have data with a specific data identifier. The storage nodes scan themselves for data with the relevant identifier, and respond by a unicast transmission if they have the data in question. The response may include a list, from the storage node directory, of other storage nodes containing the data. The server 7 then sends a unicast request, telling the storage nodes to update the data. The request of course contains the updated data. The storage nodes updating the data send an acknowledgement to the server, which responds by sending a unicast transmission containing a list of the storage nodes that successfully updated the data and of those that did not. Again, this list can be used by the maintenance process.
Again with reference to
Each storage node monitors the need for replication for all the files it stores and maintains a replication list 55. The replication list 55 thus contains a number of files that need to be replicated. The files may be ordered in correspondence with the priority of each replication. Typically, there may be three different priority levels. The highest level is reserved for files of which the storage node holds the last online copy. Such a file needs to be replicated quickly to other storage nodes, such that a reasonable level of redundancy may be achieved. A medium level of priority may relate to files where the versions are inconsistent among the storage nodes. A lower level of priority may relate to files which are stored on a storage node that is malfunctioning.
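As an illustration only, the three priority levels and the ordering of the replication list could be encoded as follows; the numeric values are an assumption of the sketch.

```python
# Lower value = more urgent, matching the three levels described above.
LAST_ONLINE_COPY = 0    # this node holds the last online copy of the file
VERSION_CONFLICT = 1    # versions are inconsistent among the storage nodes
NODE_MALFUNCTION = 2    # the file is also stored on a malfunctioning node

def order_replication_list(entries):
    """entries: list of (file_identifier, priority) tuples."""
    return sorted(entries, key=lambda entry: entry[1])
```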
The storage node deals with the files on the replication list 55 in accordance with their level of priority. The replication process is now described for a storage node which is here called the operating storage node, although all storage nodes may operate in this way.
The replication part 53 of the maintenance process starts with the operating storage node attempting 71 to become the master for the file it intends to replicate. The operating storage node sends a unicast request to become master to other storage nodes that are known to store the file in question. The directory 19 (cf.
The next step is to find 73 all copies of the file in question in the distributed storage system. This may be carried out by the operating storage node sending a multicast query to all storage nodes, asking which ones of them have the file. The storage nodes having the file submit responses to the query, containing the version of the file they keep as well as their host lists, i.e. the list of storage nodes containing the relevant file that is kept in the directory of each storage node. These host lists are then merged 75 by the operating storage node, such that a master host list is formed corresponding to the union of all retrieved host lists. If additional storage nodes are found, which were not asked when the operating storage node attempted to become master, that step may now be repeated for the additional storage nodes. The master host list contains information regarding which versions of the file the different storage nodes keep and illustrates the status of the file within the entire storage system.
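A hedged sketch of the merging step, assuming each response is a (responding node, version, host list) tuple, is given below; the representation of the master host list as a dictionary is an assumption of the example.

```python
def merge_host_lists(responses):
    """Form the master host list as the union of all retrieved host lists.

    responses: iterable of (responding_node, version, host_list) tuples.
    Returns {node_address: version or None}; None means the node was only
    mentioned in another node's host list and has not reported a version yet.
    """
    master = {}
    for responding_node, version, host_list in responses:
        master[responding_node] = version
        for host in host_list:
            master.setdefault(host, None)
    return master
```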
Should the operating storage node not have the latest version of the file in question, this file is then retrieved 77 from one of the storage nodes that do have the latest version.
The operating storage node then decides 79 whether the host list needs to be changed, typically whether additional storage nodes should be added. If so, the operating storage node may carry out a process very similar to the writing process as carried out by the server and as described in connection with
In case of version inconsistencies, the operating storage node may update 81 copies of the file that are stored on other storage nodes, such that all files stored have the correct version.
Superfluous copies of the stored file may be deleted 83. If the replication process is initiated by a delete operation, the process may jump directly to this step. Then, as soon as all storage nodes have accepted the deletion of the file, the operating storage node simply requests, using unicast, all storage nodes to physically delete the file in question. The storage nodes acknowledge that the file is deleted.
Further, the status of the file, i.e. the master host list, is updated. It is then optionally possible to repeat steps 73-83 to make sure that the need for replication no longer exists. This repetition should result in a consistent master host list that need not be updated in step 85.
Thereafter, the replication process for that file is concluded, and the operating storage node may release 87 the status as master of the file by sending a corresponding message to all other storage nodes on the host list.
This system, in which each storage node takes responsibility for maintaining all the files it stores throughout the set of storage nodes, provides a system that is self-repairing (in case of a storage node malfunction) and self-cleaning (in case of file inconsistencies or files to be deleted), with excellent reliability. It is easily scalable and can store files for a great number of different applications simultaneously.
The invention is not restricted to the specific disclosed examples and may be varied and altered in different ways within the scope of the appended claims.
Number | Date | Country | Kind |
---|---|---|---|
10160910 | Apr 2010 | EP | regional |
This application is a continuation of U.S. patent application Ser. No. 13/174,350, filed Jun. 30, 2011; which is a continuation-in-part of PCT Application No. PCT/EP2011/056317, filed Apr. 20, 2011; which claims the benefit of European Application No. EP10160910.5, filed Apr. 23, 2010, the disclosures of which are incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
3707707 | Spencer et al. | Dec 1972 | A |
5787247 | Norin et al. | Jul 1998 | A |
6003065 | Yan et al. | Dec 1999 | A |
6021118 | Houck et al. | Feb 2000 | A |
6055543 | Christensen et al. | Apr 2000 | A |
6345308 | Abe | Feb 2002 | B1 |
6389432 | Pothapragada et al. | May 2002 | B1 |
6470420 | Hospodor | Oct 2002 | B1 |
6782389 | Chrin et al. | Aug 2004 | B1 |
6839815 | Kagami et al. | Jan 2005 | B2 |
6925737 | Bolduan et al. | Aug 2005 | B2 |
6985956 | Luke et al. | Jan 2006 | B2 |
7039661 | Ranade | May 2006 | B1 |
7200664 | Hayden | Apr 2007 | B2 |
7206836 | Dinker et al. | Apr 2007 | B2 |
7266556 | Coates | Sep 2007 | B1 |
7320088 | Gawali | Jan 2008 | B1 |
7340510 | Liskov et al. | Mar 2008 | B1 |
7352765 | Dai et al. | Apr 2008 | B2 |
7406484 | Srinivasan et al. | Jul 2008 | B1 |
7487305 | Hill et al. | Feb 2009 | B2 |
7503052 | Castro et al. | Mar 2009 | B2 |
7546486 | Slik et al. | Jun 2009 | B2 |
7568069 | Jantz et al. | Jul 2009 | B2 |
7574488 | Matsubara | Aug 2009 | B2 |
7590672 | Slik et al. | Sep 2009 | B2 |
7593966 | Therrien et al. | Sep 2009 | B2 |
7624155 | Nordin et al. | Nov 2009 | B1 |
7624158 | Slik et al. | Nov 2009 | B2 |
7631023 | Kaiser et al. | Dec 2009 | B1 |
7631045 | Boerries et al. | Dec 2009 | B2 |
7631313 | Mayhew et al. | Dec 2009 | B2 |
7634453 | Bakke et al. | Dec 2009 | B1 |
7647329 | Fischman et al. | Jan 2010 | B1 |
7694086 | Bezbaruah et al. | Apr 2010 | B1 |
7769711 | Srinivasan et al. | Aug 2010 | B2 |
7778972 | Cormie et al. | Aug 2010 | B1 |
7822766 | Arndt et al. | Oct 2010 | B2 |
7840992 | Dufrene et al. | Nov 2010 | B1 |
7873650 | Chapman et al. | Jan 2011 | B1 |
7885982 | Wight et al. | Feb 2011 | B2 |
8060598 | Cook et al. | Nov 2011 | B1 |
8073881 | Georgiev | Dec 2011 | B1 |
8190561 | Poole et al. | May 2012 | B1 |
8255430 | Dutton et al. | Aug 2012 | B2 |
8296398 | Lacapra et al. | Oct 2012 | B2 |
8401997 | Tawri et al. | Mar 2013 | B1 |
8417828 | Ma et al. | Apr 2013 | B2 |
8443062 | Voutilainen et al. | May 2013 | B2 |
8561115 | Hattori et al. | Oct 2013 | B2 |
8577957 | Behar et al. | Nov 2013 | B2 |
8707091 | Gladwin et al. | Apr 2014 | B2 |
20010034812 | Ignatius et al. | Oct 2001 | A1 |
20010047400 | Coates et al. | Nov 2001 | A1 |
20020042693 | Kampe et al. | Apr 2002 | A1 |
20020073086 | Thompson et al. | Jun 2002 | A1 |
20020103888 | Janz et al. | Aug 2002 | A1 |
20020114341 | Sutherland et al. | Aug 2002 | A1 |
20020145786 | Chang et al. | Oct 2002 | A1 |
20030026254 | Sim | Feb 2003 | A1 |
20030120654 | Edlund et al. | Jun 2003 | A1 |
20030126122 | Bosley et al. | Jul 2003 | A1 |
20030154238 | Murphy et al. | Aug 2003 | A1 |
20030172089 | Douceur et al. | Sep 2003 | A1 |
20030177261 | Sekiguchi et al. | Sep 2003 | A1 |
20040059805 | Dinker et al. | Mar 2004 | A1 |
20040064729 | Yellepeddy | Apr 2004 | A1 |
20040078466 | Coates et al. | Apr 2004 | A1 |
20040088297 | Coates et al. | May 2004 | A1 |
20040111730 | Apte | Jun 2004 | A1 |
20040243675 | Taoyama et al. | Dec 2004 | A1 |
20040260775 | Fedele | Dec 2004 | A1 |
20050010618 | Hayden | Jan 2005 | A1 |
20050015431 | Cherkasova | Jan 2005 | A1 |
20050015461 | Richard et al. | Jan 2005 | A1 |
20050038990 | Sasakura et al. | Feb 2005 | A1 |
20050044092 | Adya et al. | Feb 2005 | A1 |
20050055418 | Blanc et al. | Mar 2005 | A1 |
20050177550 | Jacobs et al. | Aug 2005 | A1 |
20050193245 | Hayden et al. | Sep 2005 | A1 |
20050204042 | Banerjee et al. | Sep 2005 | A1 |
20050246393 | Coates | Nov 2005 | A1 |
20050256894 | Talanis et al. | Nov 2005 | A1 |
20050278552 | Delisle et al. | Dec 2005 | A1 |
20050283649 | Turner et al. | Dec 2005 | A1 |
20060031230 | Kumar | Feb 2006 | A1 |
20060031439 | Saffre | Feb 2006 | A1 |
20060047776 | Chieng et al. | Mar 2006 | A1 |
20060080574 | Saito et al. | Apr 2006 | A1 |
20060090045 | Bartlett et al. | Apr 2006 | A1 |
20060090095 | Massa et al. | Apr 2006 | A1 |
20060112154 | Douceur et al. | May 2006 | A1 |
20060218203 | Yamato et al. | Sep 2006 | A1 |
20070022087 | Bahar et al. | Jan 2007 | A1 |
20070022121 | Bahar et al. | Jan 2007 | A1 |
20070022122 | Bahar et al. | Jan 2007 | A1 |
20070022129 | Bahar et al. | Jan 2007 | A1 |
20070055703 | Zimran et al. | Mar 2007 | A1 |
20070088703 | Kasiolas | Apr 2007 | A1 |
20070094269 | Mikesell et al. | Apr 2007 | A1 |
20070094354 | Soltis | Apr 2007 | A1 |
20070189153 | Mason | Aug 2007 | A1 |
20070198467 | Wiser et al. | Aug 2007 | A1 |
20070220320 | Sen et al. | Sep 2007 | A1 |
20070276838 | Abushanab et al. | Nov 2007 | A1 |
20070288494 | Chrin et al. | Dec 2007 | A1 |
20070288533 | Srivastava et al. | Dec 2007 | A1 |
20070288638 | Vuong | Dec 2007 | A1 |
20080005199 | Chen et al. | Jan 2008 | A1 |
20080043634 | Wang | Feb 2008 | A1 |
20080077635 | Sporny | Mar 2008 | A1 |
20080104218 | Liang et al. | May 2008 | A1 |
20080109830 | Giotzbach et al. | May 2008 | A1 |
20080126357 | Casanova | May 2008 | A1 |
20080168157 | Marchand | Jul 2008 | A1 |
20080171556 | Carter | Jul 2008 | A1 |
20080172478 | Kiyohara et al. | Jul 2008 | A1 |
20080198752 | Fan | Aug 2008 | A1 |
20080235321 | Matsuo | Sep 2008 | A1 |
20080244674 | Hayashi | Oct 2008 | A1 |
20080270822 | Fan et al. | Oct 2008 | A1 |
20090043922 | Crowther | Feb 2009 | A1 |
20090083810 | Hattori et al. | Mar 2009 | A1 |
20090132543 | Chatley et al. | May 2009 | A1 |
20090172211 | Perry et al. | Jul 2009 | A1 |
20090172307 | Perry et al. | Jul 2009 | A1 |
20090228669 | Slesarev et al. | Sep 2009 | A1 |
20090271412 | Lacapra et al. | Oct 2009 | A1 |
20090287842 | Plamondon | Nov 2009 | A1 |
20100115078 | Ishikawa et al. | May 2010 | A1 |
20100161138 | Lange et al. | Jun 2010 | A1 |
20100169391 | Baptist et al. | Jul 2010 | A1 |
20100169415 | Leggette et al. | Jul 2010 | A1 |
20100185693 | Murty et al. | Jul 2010 | A1 |
20100198888 | Blomstedt et al. | Aug 2010 | A1 |
20100198889 | Byers et al. | Aug 2010 | A1 |
20100223262 | Krylov et al. | Sep 2010 | A1 |
20100303071 | Kotalwar et al. | Dec 2010 | A1 |
20110055353 | Tucker et al. | Mar 2011 | A1 |
20110072206 | Ross | Mar 2011 | A1 |
20110125814 | Slik et al. | May 2011 | A1 |
20110252204 | Coon et al. | Oct 2011 | A1 |
20120180070 | Pafumi et al. | Jul 2012 | A1 |
20120331021 | Lord | Dec 2012 | A1 |
20130060884 | Bernbo | Mar 2013 | A1 |
20130103851 | Umeki et al. | Apr 2013 | A1 |
20130254314 | Chow | Sep 2013 | A1 |
Number | Date | Country |
---|---|---|
1726454 | Jan 2006 | CN |
0774723 | Jul 1998 | EP |
0934568 | Jun 2003 | EP |
1521189 | Apr 2005 | EP |
1578088 | Sep 2005 | EP |
1669850 | Jun 2006 | EP |
1798934 | Jun 2007 | EP |
2031513 | Mar 2009 | EP |
6-348527 | Dec 1994 | JP |
11-249874 | Sep 1999 | JP |
2000-322292 | Nov 2000 | JP |
2003-030012 | Jan 2003 | JP |
2003-223286 | Aug 2003 | JP |
2003-248607 | Sep 2003 | JP |
2003-271316 | Sep 2003 | JP |
2004-005491 | Jan 2004 | JP |
2007-058275 | Mar 2007 | JP |
2008-250767 | Oct 2008 | JP |
2009-259007 | Nov 2009 | JP |
WO 9938093 | Jul 1999 | WO |
WO 0118633 | Mar 2001 | WO |
WO 0235359 | May 2002 | WO |
WO 0244835 | Jun 2002 | WO |
WO 2004053677 | Jun 2004 | WO |
WO 2006124911 | Nov 2006 | WO |
WO 2007014296 | Feb 2007 | WO |
WO 2007115317 | Oct 2007 | WO |
WO 2007134918 | Nov 2007 | WO |
WO 2008069811 | Jun 2008 | WO |
WO 2008102195 | Aug 2008 | WO |
WO 2009048726 | Apr 2009 | WO |
WO 2010046393 | Apr 2010 | WO |
WO 2010080533 | Jul 2010 | WO |
WO 2011131717 | Oct 2011 | WO |
Entry |
---|
Deering et al., “Multicast Routing in Datagram Internetworks and Extended LANs”, ACM Transactions on Computer Systems, vol. 8, No. 2, May 1990, pp. 85-110. |
Hewlett-Packard Development Company L. P., “HP Volume Shadowing for OpenVMS”, OpenVMS Alpha 7.3-2, Sep. 2003, 162 pages. |
Katsurashima et al., “NAS Switch: a Novel CIFS Server Virtualization”, Proceedings. 20th IEEE/11th NASA Goddard Conference on Mass Storage Systems and Technologies, Apr. 7-10, 2003, pp. 82-86. |
Kronenberg et al., "VAXclusters: A Closely-Coupled Distributed System", ACM Transactions on Computer Systems, vol. 4, No. 2, May 1986, pp. 130-146. |
Parris, Keith, “Using OpenVMS Clusters for Disaster Tolerance”, System/Software Engineer, HP Services—Systems Engineering, 2003, 27 pages. |
SAP Library, "Queues for Prioritized Message Processing", SAP Exchange Infrastructure, Available online at http://help.sap.com/saphelp_nw04/helpdata/en/04/827440c36ed562e10000000a155106/content.htm, Feb. 6, 2009, pp. 1-2. |
squid-cache.org, "Squid Configuration Directive reply_body_max_size", Available online at http://www.squid-cache.org/Doc/config/reply_body_max_size/, Dec. 21, 2008, pp. 1-2. |
Suryanarayan et al., "Performance Evaluation of New Methods of Automatic Redirection for Load Balancing of Apache Servers Distributed in the Internet", Proceedings. 25th Annual IEEE Conference on Local Computer Networks, Nov. 8-10, 2000, pp. 644-651. |
Tang et al., “An Efficient Data Location Protocol for Self-Organizing Storage Clusters”, Supercomputing, ACM/IEEE Conference, Phoenix, AZ, USA, Nov. 15-21, 2003, 13 pages. |
Trustwave, "How Do I Block Large Files by Content Size". |
Weatherspoon et al., “Antiquity: Exploiting a Secure Log for Wide-Area Distributed Storage”, Proceedings of the EuroSys Conference, ACM 2007, Lisbon, Portugal, Mar. 21-23, 2007, pp. 371-384. |
Wikipedia, "FastTrack", Available online at: http://de.wikipedia.org/w/index.php?title=FastTrack&oldid=83614953, Jan. 8, 2011, pp. 1-2. |
Wikipedia, "Load Balancing (Computing)", Available online at http://www.en.wikipedia.org/w/index.php?title=Load_balancing_%28computing%29&oldid=446655159, Aug. 25, 2011, pp. 1-7. |
Zhang et al., “Brushwood: Distributed Trees in Peer-to-Peer Systems”, Peer-to-Peer Systems IV Lecture Notes in Computer Science vol. 3640, 2005, pp. 47-57. |
Number | Date | Country | |
---|---|---|---|
20140379845 A1 | Dec 2014 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13174350 | Jun 2011 | US |
Child | 14485502 | US |
Number | Date | Country | |
---|---|---|---|
Parent | PCT/EP2011/056317 | Apr 2011 | US |
Child | 13174350 | US |