1. Field of the Invention
The present invention relates to distributed computing in a network having multiple computing nodes. More particularly, the present invention relates to providing distributed computing services based on utilizing multiple computing nodes for sharing tasks associated with distributed computing services to provide a virtualization of distributed computing over a network of heterogeneous resources.
2. Description of the Related Art
Distributed computing has evolved to technologies that enable multiple devices to share in providing a given service. For example, grid computing has been proposed as a model for solving substantially large computational problems using large numbers of computers arranged as clusters embedded in a distributed infrastructure. Grid computing has been described as differentiated from distributed computing through an increased focus on resource sharing, coordination, manageability, and high performance. The focus on resource sharing has been referred to as “the grid problem”, which has been defined as the set of problems associated with resource sharing among a set of individuals or groups.
A fundamental problem of distributed systems is the assumption that each peer computing node is substantially the same size. For example, a RAID (Redundant Array of Inexpensive Disks) storage device is implemented using a plurality of identically sized disks: a write operation to a disk at a prescribed disk block location can be easily repeated on the remaining disks of the RAID storage device by performing the same write operation to the same prescribed disk block location. Hence, existing systems do not contemplate the problem that a given peer computing node may run out of resources while attempting to perform its own tasks. In particular, if a distributed system is implemented using computing nodes (which may share resources such as computing power, storage capacity, bandwidth, etc.), where each computing node is responsible not only for storing its own data but also for backing up data for other nodes, then a problem arises if a substantially larger node having a substantially larger amount of data (i.e., at least an order of magnitude larger) joins the network, because the larger node will overwhelm the capacity of the smaller nodes of the distributed system. Consequently, a smaller node would be forced either to not back up the data of the larger node, resulting in a loss of data if the larger node is unavailable, or to no longer store the data that it is responsible for, resulting in a loss of that data. Hence, the smaller nodes are incapable of both storing their own data and backing up the data of the larger node.
In addition, attempts to partition a substantially larger computing node into multiple virtual computing nodes, each having a size that matches the existing computing nodes, do not solve the problem of losing data, since a random distribution of the virtual computing nodes among the existing computing nodes still may result in one virtual computing node backing up the data of a second virtual computing node, wasting resources by replicating data on the same physical device, namely the larger computing node. Further, a loss of the substantially larger computing node will result in a loss of all of the multiple virtual computing nodes; hence, the distributed network can encounter a loss of data based on relying on the virtual nodes backing up each other.
Consequently, newer, high-performance machines cannot be added to the network, since the newer machines must have the same capacity as the existing machines to prevent overwhelming the existing machines. Alternatively, the newer machines must be configured to limit their resource utilization to the capacity of the existing machines, preventing the additional capacity of the newer machines from being utilized.
There is a need for an arrangement that enables services to be provided in a network by multiple computing nodes, where the behavior of all the computing nodes in a computing group is equivalent such that each of the computing nodes provides equivalent service levels. Such equivalent service levels provide a perception for a client that the same read/writable data can be accessed, in a manner that provides data replication, fault tolerance, and load balancing, from any one of the computing nodes in the computing group. The establishment of equivalent service levels for all computing nodes within the computing group enables larger nodes to provide more services and/or data based on joining more computing groups, whereas smaller computing nodes will supply fewer services and/or data based on joining fewer computing groups.
There also is a need for an arrangement that enables computing nodes having substantially different capacity levels (i.e., differing by at least an order of magnitude, or ten times) to participate in distributed computing operations in a distributed network, without overwhelming the capacity of any one computing node participating in the distributed computing operations.
There also is a need for an arrangement that enables a network to employ distributed computing in a manner that self-adapts to changes in service requirements, without loss of data or necessity of manual reconfiguration of network nodes.
There also is a need for an arrangement that enables each of the computing nodes in a distributed network to adaptively participate in as many distributed services as desirable, based on the capacity of the corresponding computing node, where larger computing nodes having substantially larger capacity can contribute to providing a greater amount of distributed services relative to smaller computing nodes that provide a smaller amount of distributed services based on the respective capacity.
These and other needs are attained by the present invention, where a network provides distributed computing services based on participation in respective resource groups by computing nodes, each resource group including a corresponding resource requirement for any computing node that joins the corresponding resource group for execution of the corresponding distributed computing service. Each computing node, in response to determining its corresponding available node capacity, is configured for selectively creating and joining at least one new resource group for execution of a corresponding distributed computing service having a corresponding resource requirement, and/or selectively joining at least one of the available resource groups, based on the corresponding available node capacity satisfying the corresponding resource requirement. Each computing node also is configured for selectively leaving any one of the joined resource groups based on determined conditions. Hence, distributed computing services are provided by multiple computing nodes, where each computing node may choose to participate in as many resource groups as needed based on the corresponding available node capacity.
One aspect of the present invention provides a method in a computing node. The method includes determining an available node capacity for the computing node relative to any resource groups having been joined by the computing node. Each resource group has a corresponding resource requirement for providing a corresponding distributed computing service within a network. The method also includes selectively creating and joining a first of the resource groups based at least on determining that the corresponding resource requirement is less than the available node capacity, and selectively joining, within the network, a second of the resource groups based on the corresponding resource requirement of the second resource group being less than the available node capacity. The method also includes executing the distributed computing services for the resource groups having been joined by the computing node, according to the respective resource requirements. The selective creation and joining of resource groups enables each computing node to decide whether it prefers to create a new resource group, for example due to attributes of the computing node, including creation of the service upon initialization of the network. Moreover, the selective joining of a second of the resource groups enables the computing node to join as many resource groups as desired, based on the available node capacity. Consequently, computing nodes of differing sizes can join as many or as few resource groups as desired, based on the relative available node capacity.
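By way of a hedged illustration only (the class and function names below are hypothetical and not taken from the disclosure), the capacity test underlying this aspect can be sketched in Python: a computing node creates or joins a resource group only when the group's resource requirement is less than the node's currently available capacity.

```python
# Hedged sketch with hypothetical names; capacities expressed in GB of storage.
from dataclasses import dataclass, field

@dataclass
class ResourceGroup:
    name: str
    requirement_gb: float          # resource requirement (e.g., storage) per member node

@dataclass
class ComputingNode:
    name: str
    total_capacity_gb: float
    joined: dict = field(default_factory=dict)   # group name -> capacity reserved for it

    def available_capacity(self) -> float:
        # available node capacity = total capacity minus capacity already reserved
        return self.total_capacity_gb - sum(self.joined.values())

    def try_join(self, group: ResourceGroup) -> bool:
        # create or join only if the requirement is less than the available capacity
        if group.requirement_gb < self.available_capacity():
            self.joined[group.name] = group.requirement_gb
            return True
        return False

# Example: a smaller node joins fewer resource groups than a larger node.
s1, s2, s3 = (ResourceGroup(n, 60.0) for n in ("S1", "S2", "S3"))
small, large = ComputingNode("N1", 160.0), ComputingNode("N3", 2500.0)
print([small.try_join(g) for g in (s1, s2, s3)])   # [True, True, False]
print([large.try_join(g) for g in (s1, s2, s3)])   # [True, True, True]
```

Running the example shows a 160 GB node joining two 60 GB groups while a 2.5 TB node joins all three, consistent with nodes of different sizes joining as many or as few resource groups as their capacity allows.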
Another aspect of the present invention provides a method in a network. The method includes providing an identification of resource groups, each resource group having a corresponding resource requirement for providing a corresponding distributed computing service within the network. The method also includes, in each computing node of the network, determining a corresponding available node capacity relative to any of said resource groups having been joined by the computing node, selectively joining at least one of the resource groups based on the corresponding resource requirement being less than the available node capacity, and executing the distributed computing services for the resource groups having been joined by the computing node, according to the respective resource requirements. The method also includes identifying the computing nodes having joined the resource groups, and providing connectivity for a user node, requesting one of the distributed computing services, to one of the computing nodes having joined the corresponding resource group.
Additional advantages and novel features of the invention will be set forth in part in the description which follows and in part will become apparent to those skilled in the art upon examination of the following or may be learned by practice of the invention. The advantages of the present invention may be realized and attained by means of instrumentalities and combinations particularly pointed out in the appended claims.
Reference is made to the attached drawings, wherein elements having the same reference numeral designations represent like elements throughout and wherein:
The directory service 12, implemented for example as one of the distributed services provided by one of the resource groups 16, is configured for providing resolutions for identifying available resource groups 16 for client nodes 18 attempting to obtain a corresponding distributed service, and for computing nodes 14 attempting to join a resource group to provide distributed processing. A query issued to the directory service 12 may specify different attributes about the service, for example service type (e.g., e-mail, e-commerce, accounting, database management), data type (e.g., e-mail data for user names starting with A-L and not M-Z), or some other class identification (e.g., corporate, engineering, marketing, legal, etc.). Additional details related to the directory service 12 can be obtained from commonly-assigned application Ser. No. 11/000,041, filed Dec. 1, 2004, entitled “ARRANGEMENT IN A SERVER FOR PROVIDING DYNAMIC DOMAIN NAME SYSTEM SERVICES FOR EACH RECEIVED REQUEST”, issued as U.S. Pat. No. 7,499,998, the disclosure of which is incorporated in its entirety herein by reference.
Each computing node 14 is configured for selectively joining an available resource group 16, or creating a new resource group 16, based on determining whether the available node capacity for the computing node 14 is sufficient for the resource requirement 20 specified for the corresponding resource group 16. Once a computing node 14 has joined a resource group 16, the joining of the computing node 14 with the resource group 16 is registered with the directory service 12, and client nodes 18 can be connected to a computing node 14 that provides the corresponding distributed service, based on the directory service 12 responding to a query identifying one of the distributed computing services by redirecting the client node 18 to one of the computing nodes 14 that joined the appropriate resource group 16.
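As a rough sketch under assumed names (the disclosure does not specify the registry's data structure), the directory service 12 can be modeled as a mapping from each resource group to its member computing nodes, with a client query resolved by redirecting the client to one member of the requested group:

```python
# Hedged sketch of the directory service; names and structure are assumptions.
import random

class DirectoryService:
    def __init__(self):
        self.members = {}            # resource group name -> set of member node names

    def register(self, group: str, node: str) -> None:
        # called when a computing node joins a resource group
        self.members.setdefault(group, set()).add(node)

    def resolve(self, group: str) -> str:
        # respond to a client query by redirecting to one member of the group
        nodes = self.members.get(group)
        if not nodes:
            raise LookupError(f"no computing node provides service {group!r}")
        return random.choice(sorted(nodes))

directory = DirectoryService()
for node in ("N1", "N2", "N3", "N4", "N6"):
    directory.register("S1", node)
print(directory.resolve("S1"))       # e.g. "N2": the client node is redirected here
```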
Since a computing node 14 is configured for selectively joining a resource group 16 based on the computing node 14 deciding whether it has sufficient available node capacity for the corresponding resource requirement 20, smaller computing nodes 14 (e.g., N1, N2, N5, N6) that have only a limited amount of resources, for example only 160 gigabytes (GB) of storage capacity (or, for example, a limited CPU processing capacity), are able to provide a contribution to available resource groups 16 based on their relative available node capacities. For example, computing nodes N1 and N2 belong only to resource groups S1 and S2, computing node N5 belongs only to resource groups S3 and S4, and computing node N6 belongs only to resource groups S1 and S4.
Computing nodes 14 having substantially larger available node capacities, for example at least an order of magnitude greater (i.e., a difference of at least 10 times) (e.g., N4 having 1600 GB (1.6 TB) storage capacity, N3 having 2.5 terabytes (TB) storage capacity), also can join the same resource groups 16 as the smaller computing nodes, since each computing node (e.g., N1, N2, N3, N4, N6) having joined a given resource group (e.g., S1) is required to provide only the amount of resources specified by the corresponding resource requirement 20 (R1). Hence, different sized computing nodes 14 can join the same resource group 16, even if the computing nodes 14 differ in size by orders of magnitude.
In addition, since each computing node 14 selectively joins a resource group 16 based on whether the corresponding resource requirement 20 is less than the available node capacity, larger nodes (e.g., N3) can join a greater number of resource groups 16, enabling the larger computing node 14 to provide virtualized services to a substantially larger number of resource groups. As illustrated in
Hence, each resource group 16 can be defined based on the attributes 26 of the services being provided, as well as the attributes 28 of the data which is replicated among the computing nodes 14. Also apparent from the foregoing is that each of the computing nodes 14 that belong to a given resource group 16 can respond to a given service attribute 26 that specifies that all data is to be replicated among the computing nodes of the resource group 16, and that each computing node 14 of the resource group 16 has authority to modify the data or create new data, and a requirement to update the other computing nodes 14 of any modification or creation to ensure data is synchronized. An exemplary method of replicating data among the computing nodes 14 is disclosed in commonly-assigned, copending application Ser. No. 10/859,209, filed Jun. 3, 2004, entitled “ARRANGEMENT IN A NETWORK NODE FOR SECURE STORAGE AND RETRIEVAL OF ENCODED DATA DISTRIBUTED AMONG MULTIPLE NETWORK NODES”, the disclosure of which is incorporated in its entirety herein by reference. In addition, only the computing nodes 14 that belong to the resource group 16 have authority to modify the associated data, such that non-members cannot modify the data of the resource group. Ownership of authority to modify data is described in commonly-assigned, copending application Ser. No. 10/859,208, filed Jun. 3, 2004, entitled “ARRANGEMENT IN A NETWORK FOR PASSING CONTROL OF DISTRIBUTED DATA BETWEEN NETWORK NODES FOR OPTIMIZED CLIENT ACCESS BASED ON LOCALITY”, the disclosure of which is incorporated in its entirety herein by reference.
Hence, each computing node 14 is able to determine whether it wants to join any given resource group 16 based on comparing the resource group attributes 24 with internal computing node attributes (not shown) that specify preferences for the types of distributed services the computing node 14 should provide. For example, a computing node 14 may include internal computing node preferences (not shown) to indicate the computing node 14 should avoid database management services or financial transaction services, but should join any distributed services associated with a prescribed class of service, for example e-mail server applications, Web hosting applications, Voice over IP applications, etc. In addition, the computing node 14, upon determining that it wishes to join a given resource group, can compare the resource requirement 20 of that resource group 16 with the available node capacity 32 in order to determine whether the computing node 14 has sufficient available resources to join that resource group 16.
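A minimal sketch of this two-stage decision, with hypothetical preference attributes chosen only for illustration: the node first screens a candidate resource group against its internal preferences, and only then applies the capacity comparison.

```python
# Hedged sketch; attribute and preference names are assumptions for illustration.
def acceptable(group_attrs: dict, preferences: dict) -> bool:
    """Compare resource group attributes with the node's internal preferences."""
    if group_attrs.get("service_type") in preferences.get("avoid", ()):
        return False
    preferred = preferences.get("prefer")
    return preferred is None or group_attrs.get("service_type") in preferred

def should_join(group_attrs: dict, requirement_gb: float,
                preferences: dict, available_gb: float) -> bool:
    # join only if the group is acceptable AND its requirement fits the available capacity
    return acceptable(group_attrs, preferences) and requirement_gb < available_gb

prefs = {"avoid": {"database", "financial"}, "prefer": {"e-mail", "web-hosting", "voip"}}
print(should_join({"service_type": "e-mail"}, 20.0, prefs, 100.0))     # True
print(should_join({"service_type": "database"}, 20.0, prefs, 100.0))   # False
```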
As illustrated in
The computing node 14 also includes a resource monitor 62 configured for continually monitoring the resource utilization in each of the computing node resources 44, 46, 48, and 50, and updating the resource table 30 to indicate the amount of computing node resources that are consumed by each of the joined resource groups 16 relative to the reserved capacity as specified in the resource table 30. The resource monitor 62 also is configured for determining the available node capacity 32 based on subtracting, from the total system capacity, the reserved capacity 34 that has already been allocated. As described below, the resource monitor 62 also is configured for comparing the amount of consumed resources relative to the reserved capacity for a given resource group, and predicting when the amount of consumed resources for a resource group may exceed the resource requirements of the resource group, necessitating a split in the resource group.
The computing node 14 also includes a resource group arbitration module 60 configured for reading the resource table 30 in order to identify the available node capacity 32 determined by the resource monitor 62. The resource group arbitration module 60 is configured for selectively creating a new resource group 16, as needed for example due to internal attributes including administrator settings, etc. The resource group arbitration module 60 also is configured for identifying available resource groups 16, for example by accessing the directory service 12, and selectively joining resource groups 16 based on the associated group attributes 24, and also based on whether the resource requirement 20 of the available resource group 16 is less than the available node capacity 32. Based on the available node capacity 32 being sufficient for the resource requirement 20, the arbitration module 60 can allocate reserve capacity 34 and join the resource group 16, if desired. If after joining the resource group 16 and allocating the corresponding reserve capacity 34 the arbitration module 60 identifies that available node capacity is still present, the arbitration module 60 can continue to selectively join additional resource groups 16 based on the available node capacity 32 being sufficient for the corresponding resource requirement 20.
The method begins in step 100 of
The resource monitor 62 continually monitors all consumption of resources in order to determine if consumption of resources by an assigned consumer (e.g., a resource group 16) reaches a prescribed percentage threshold (e.g., 95%) relative to the amount of reserved capacity 34. For example, the resource monitor 62 may employ a first-order instantaneous resource consumption evaluation based on comparing the instantaneous resource consumption (e.g., stored S1 data 55a) relative to the reserved resource 54a required by the corresponding resource requirement 20. The resource monitor 62 also may employ a second-order instantaneous resource consumption evaluation (e.g., running average, projected utilization) by predicting when the resource consumption (e.g., 55a) may exceed the reserved resource (e.g., 54a). Third-order instantaneous resource consumption evaluation may be employed by the resource monitor 62 to identify “hot spots” (also known as “jerk”), and fourth-order evaluation may be used to assess data stability, and locality. Any one of these factors may be used to determine whether to split or merge a resource group, described below.
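As a hedged illustration (the 95% threshold follows the example above; the sampling window, units, and function names are assumptions), a first-order evaluation compares instantaneous consumption against the reserved capacity, while a second-order evaluation projects a running-average growth trend to estimate when the reservation would be exceeded:

```python
# Hedged sketch of first- and second-order consumption evaluation (capacities in GB).
def first_order_alert(consumed_gb: float, reserved_gb: float, threshold: float = 0.95) -> bool:
    """True if instantaneous consumption reaches the prescribed percentage
    (e.g., 95%) of the capacity reserved for the resource group."""
    return consumed_gb >= threshold * reserved_gb

def second_order_projection(samples: list, reserved_gb: float) -> float | None:
    """Project, from a running average of recent growth, how many sampling
    intervals remain before consumption would exceed the reservation.
    Returns None if consumption is flat or shrinking."""
    if len(samples) < 2:
        return None
    deltas = [b - a for a, b in zip(samples, samples[1:])]
    rate = sum(deltas) / len(deltas)                 # running-average growth per interval
    if rate <= 0:
        return None
    return (reserved_gb - samples[-1]) / rate        # intervals until the reserve is exceeded

print(first_order_alert(19.2, 20.0))                          # True: 96% of a 20 GB reservation
print(second_order_projection([14, 15.5, 17, 18.5], 20.0))    # 1.0 interval remaining
```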
The arbitration module 60 determines in step 102 whether any new resource groups 16 need to be created, for example based on internal attributes that may identify the absence of a prescribed distributed service in the network 10. For example, at network start up if the computing node 14 is the first computing node to enter the network 10, the computing node 14 will begin to generate resource groups 16 in order to begin providing distributed computing services in the network 10. In this example of network startup, one of the first resource groups 16 created may be used to provide the directory service 12.
If new resource groups are to be created, and if in step 104 the resource requirement 20 is less than the available node capacity 32, the arbitration module 60 selectively creates in step 106 a new resource group 16 by creating the resource requirements 20, the attributes 24, and reserving in step 108 the capacity 34 needed to join the resource group 16. If necessary, the arbitration module 60 registers in step 110 with the directory service 12 indicating that the computing node 14 (e.g., “N1”) is a member of the resource group 16 (e.g., “S1”). The computing node 14 can begin executing distributed computing services in step 112, for example replication of data among other computing nodes 14 belonging to the same resource group 16, or serving a connected client node 18, based on instantiating the appropriate process 58a for the distributed computing service.
If in step 102 no new resource groups need to be created, the arbitration module 60 identifies in step 114 any available resource groups (e.g., "S2") 16, for example based on issuing a query to the directory service 12 that requests identification of either all available resource groups 16, or selected resource groups having selected service attributes 24 (e.g., e-mail application service) or selected data attributes 28. In response to receiving from the directory service 12 at least one resource requirement 20 for a corresponding resource group 16, the resource group arbitration module 60 determines in step 116 whether the resource requirement 20 for joining the available resource group 16 is less than the available node capacity 32. If the resource requirement 20 is not less than the available node capacity 32 (i.e., the resource requirement 20 equals or exceeds the available node capacity 32), the resource group arbitration module 60 decides not to join the resource group, and continues with the steps of
If in step 116 the resource group arbitration module 60 determines that the resource requirement 20 does not exceed the available node capacity 32, the resource group arbitration module 60 reserves capacity in step 118 from the available node capacity 32, creates an additional entry 34 for the reserved capacity, and joins the resource group 16.
The resource group arbitration module 60 finishes joining the resource group 16 by notifying the other computing nodes 14 already belonging to the resource group 16 of its having joined the resource group 16. The other computing nodes already belonging to the resource group 16 can then replicate their data associated with the resource group 16 to the newly-joined computing node 14 for storage in the associated reserved data storage 54, enabling the computing resources 42 to execute in step 120 the distributed computing services, for example by instantiating the relevant software processes (e.g., 58a). If in step 122 the resource group arbitration module 60 decides that more resource groups 16 should be joined (e.g., based on the available node capacity 32 being substantially larger than any of the reserved capacity entries 34), steps 114 through 120 are repeated, else the resource group arbitration module continues with the steps as illustrated in
If in step 124 the resource group arbitration module 60 determines to leave a joined group 16, for example due to reduced activity in the resource group 16, or based on determining that the available node capacity has diminished below a prescribed threshold, the resource group arbitration module 60 leaves the resource group 16 in step 126 by notifying the other members of the resource group, reclaiming the capacity 34 that had been reserved for the resource group 16, and returning the reclaimed capacity to the available node capacity 32. Note that even though a computing node 14 implemented as a laptop could decide to leave a joined group in anticipation of being disconnected from the network 10, the disclosed embodiment does not necessarily require that the computing node 14 leave the joined group 16 merely because it will be disconnected from the other computing nodes 14; rather, the computing node 14 still could continue to provide distributed services while disconnected from the other computing nodes 14 of the resource group, and simply resynchronize data and state information upon reconnection with the other computing nodes 14.
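Continuing in the same hypothetical style (names and structures are illustrative, not the disclosed implementation), leaving a joined resource group amounts to notifying the remaining members and returning the reserved capacity 34 to the available node capacity 32:

```python
# Hedged sketch with hypothetical names; capacities in GB.
def leave_group(joined: dict, group_name: str, notify) -> float:
    """Leave a joined resource group: notify the remaining members, then
    reclaim the reserved capacity back into the available node capacity."""
    reclaimed = joined.pop(group_name)           # drop the reservation for this group
    notify(group_name)                           # tell the other members of the group
    return reclaimed

total_capacity = 160.0
joined = {"S1": 60.0, "S2": 60.0}                # group name -> reserved capacity
leave_group(joined, "S2", notify=lambda g: print(f"leaving {g}"))
available = total_capacity - sum(joined.values())
print(available)                                 # 100.0 GB available again
```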
The resource group arbitration module 60 of any one of the joined nodes 14 can determine in step 128 of
The resource group arbitration module 60 of that node (N3) arbitrates in step 130 with the arbitration modules 60 of the other computing nodes (N1, N2, N4, N6) 14 belonging to the resource group 16 for dividing the resource group S1 into divided resource groups 16a and 16b that will have their own respective resource group attributes 22, including computing service attributes 24 and resource requirements 20. Upon completion of the arbitration in step 130, each of the computing nodes in step 132 either leaves the old group “S1” 16 and joins the newly divided group “S1B” 16b, or stays in the remaining group 16; depending on the nature of the resource groups, each computing node may reclaim the capacity having been transferred to the second group “S1B” 16b, effectively joining the divided group “S1A” 16a based on the transformation of group “S1” into “S1A” upon closing of resources associated with the resources “A2” and “DB” of group “S1B”; alternately, the resource requirement for the newly divided group may equal the original resource requirement, effectively doubling the capacity of the system due to a creation of another resource group having the same requirement.
Hence, the computing nodes N1, N2, and N3 join the divided computing group “S1A” 16a, and the computing nodes N3, N4, and N6 join the divided computing group “S1B” 16b, where the divided computing group “S1A” 16a replicates data for the data class “DA” and provides distributed services associated with application process “A1”, and divided computing group “S1B” 16b replicates data for the data class “DB” and provides distributed services associated with the application process “A2”.
Hence, the splitting of the resource group "S1" 16 into the two divided resource groups "S1A" 16a and "S1B" 16b enables capacity in the network to be increased, since the respective resource requirements 20 of the divided resource groups "S1A" 16a and "S1B" 16b (e.g., 20 GB storage each) are equal to the original resource requirement 20 of the original resource group 16, resulting in a doubling in system capacity. In addition, splitting may be effective for dividing distributed services that have mutually exclusive data operations, for example dividing an e-mail service based on logically dividing a population of e-mail subscribers between the two resource groups 16a and 16b. However, the larger nodes (e.g., N3) still can participate in both resource groups 16a and 16b, resulting in a service virtualization of the node N3 into two distinct virtual computing nodes without any loss of computing resources or risk of data loss.
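The following hypothetical sketch follows the example above (three member nodes per divided group and a 20 GB per-node requirement); the alphabetic division of subscribers is only an assumed illustration of mutually exclusive data operations:

```python
# Hedged sketch: splitting resource group S1 into S1A and S1B, each keeping the
# original per-node requirement, so the total capacity of the system doubles.
REQUIREMENT_GB = 20.0                                 # per-node requirement of S1, S1A and S1B

subscribers = ["alice", "bob", "maria", "zoe"]
s1a_data = [s for s in subscribers if s[0] <= "l"]    # e.g. subscribers A-L -> S1A
s1b_data = [s for s in subscribers if s[0] > "l"]     # e.g. subscribers M-Z -> S1B

members = {"S1A": {"N1", "N2", "N3"}, "S1B": {"N3", "N4", "N6"}}

# A node belonging to both divided groups (here N3) reserves the requirement twice,
# providing two virtual service instances on one physical node.
reserved_on_n3 = sum(REQUIREMENT_GB for group, nodes in members.items() if "N3" in nodes)
print(s1a_data, s1b_data)                             # ['alice', 'bob'] ['maria', 'zoe']
print(reserved_on_n3)                                 # 40.0 GB reserved on N3 after the split
```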
If at some time the resource group arbitration module 60 of a node (e.g., N4) determines in step 134 a desire to merge the existing resource group (e.g., "S1B") 16b with another resource group (e.g., "S1A") 16a, for example due to diminished resource requirements 20, or a convergence of resource group attributes 22, the resource group arbitration module 60 of computing node N4 arbitrates in step 136 with the arbitration modules 60 of the computing nodes 14 of both groups 16a (N1, N2, N3) and 16b (N3, N6) to decide whether to merge the groups 16a and 16b, for example based on comparing the associated resource group attributes 22. Assuming all the computing nodes that belong to the resource groups 16a and/or 16b decide to merge, the resource group arbitration modules 60 of the computing nodes 14 merge the two resource groups 16a and 16b in step 138 into a single merged resource group 16 (S1) based on replicating data among each other, and reserving capacity accordingly. The overall process in each computing node 14 repeats itself as described above with respect to
As apparent from the foregoing, there may be provisions to allow instantaneous consumption to exceed the resource capacity for a limited time, in order to accommodate instances such as when one computing node fails and the other computing nodes are temporarily over-burdened. Further, the decision whether to split or merge is not limited to evaluating instantaneous consumption, as each element of the resource group attributes 22 may have specific semantic information that is not related to the instantaneous consumption (e.g., security considerations, day of week, etc.). Any one of these factors may be relevant in determining whether to split or merge a resource group.
According to the disclosed embodiment, distributed services in a network are provided by computing nodes having substantially different node capacities, based on each computing node selectively joining a resource group having a resource requirement that is less than the available node capacity. Each computing node that joins a resource group, however, shares the same amount of resources as specified by the resource requirement for the resource group, regardless of the overall capacity of the computing node. Hence, substantially larger computing nodes can join a larger number of resource groups, providing distributed service virtualization without concerns of loss of data if the larger computing node becomes unavailable. In addition, substantially larger computing nodes can be added to the network 10 without any adverse effect on existing computing nodes 14 already in the network. In addition, each computing node 14 retains its own control to decide whether it should create a new computing group, join an existing computing group, or leave a computing group; each computing node 14 also can arbitrate with other computing nodes of the resource group to determine whether the resource group should be split, or merged with another resource group, providing an adaptive and scalable distributed computing system that can automatically adapt to changes.
While the disclosed embodiment has been described in connection with what is presently considered to be the most practical and preferred embodiment, it is to be understood that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.
This application is a continuation of application Ser. No. 13/221,625, filed Aug. 30, 2011, which is a continuation of application Ser. No. 11/053,954, filed Feb. 10, 2005 and issued Nov. 1, 2011 as U.S. Pat. No. 8,051,170.
Number | Name | Date | Kind |
---|---|---|---|
5049873 | Robins et al. | Sep 1991 | A |
5428793 | Odnert et al. | Jun 1995 | A |
5555417 | Odnert et al. | Sep 1996 | A |
5951694 | Choquier et al. | Sep 1999 | A |
6014669 | Slaughter et al. | Jan 2000 | A |
6195682 | Ho et al. | Feb 2001 | B1 |
6311251 | Merritt et al. | Oct 2001 | B1 |
6392705 | Chaddha | May 2002 | B1 |
6418477 | Verma | Jul 2002 | B1 |
6421687 | Klostermann | Jul 2002 | B1 |
6466936 | Ronstrom | Oct 2002 | B1 |
6516350 | Lumelsky et al. | Feb 2003 | B1 |
6556544 | Lee | Apr 2003 | B1 |
6697064 | Kilgard et al. | Feb 2004 | B1 |
6865527 | Go et al. | Mar 2005 | B2 |
6970913 | Albert et al. | Nov 2005 | B1 |
6975613 | Johansson | Dec 2005 | B1 |
6985976 | Zandonadi et al. | Jan 2006 | B1 |
7003575 | Ikonen | Feb 2006 | B2 |
7017016 | Chujo et al. | Mar 2006 | B2 |
7047177 | Lee et al. | May 2006 | B1 |
7076783 | Frank et al. | Jul 2006 | B1 |
7089192 | Bracchitta et al. | Aug 2006 | B2 |
7095739 | Mamillapalli et al. | Aug 2006 | B2 |
7111147 | Strange et al. | Sep 2006 | B1 |
7117257 | Beshai | Oct 2006 | B2 |
7136903 | Phillips et al. | Nov 2006 | B1 |
7162476 | Belair et al. | Jan 2007 | B1 |
7203871 | Turner et al. | Apr 2007 | B2 |
7216090 | LaCroix | May 2007 | B2 |
7272645 | Chang et al. | Sep 2007 | B2 |
7272652 | Keller-Tuberg | Sep 2007 | B1 |
7299410 | Kays et al. | Nov 2007 | B2 |
7421502 | Czap et al. | Sep 2008 | B2 |
7457835 | Toebes et al. | Nov 2008 | B2 |
7478149 | Joshi et al. | Jan 2009 | B2 |
7499998 | Toebes et al. | Mar 2009 | B2 |
7529822 | Joshi et al. | May 2009 | B2 |
7543020 | Walker et al. | Jun 2009 | B2 |
7574523 | Traversat et al. | Aug 2009 | B2 |
8051170 | Turner et al. | Nov 2011 | B2 |
8239540 | Turner et al. | Aug 2012 | B2 |
20020010692 | Sasagawa et al. | Jan 2002 | A1 |
20020010783 | Primak et al. | Jan 2002 | A1 |
20020077791 | Go et al. | Jun 2002 | A1 |
20020103893 | Frelechoux et al. | Aug 2002 | A1 |
20020114341 | Sutherland et al. | Aug 2002 | A1 |
20020161835 | Ball et al. | Oct 2002 | A1 |
20020169861 | Chang et al. | Nov 2002 | A1 |
20020188657 | Traversat et al. | Dec 2002 | A1 |
20030026268 | Navas | Feb 2003 | A1 |
20030035380 | Downing et al. | Feb 2003 | A1 |
20030051117 | Burch et al. | Mar 2003 | A1 |
20030069974 | Lu et al. | Apr 2003 | A1 |
20030074256 | LaCroix | Apr 2003 | A1 |
20030074453 | Ikonen | Apr 2003 | A1 |
20030084157 | Graupner et al. | May 2003 | A1 |
20030149847 | Shyam et al. | Aug 2003 | A1 |
20030154238 | Murphy et al. | Aug 2003 | A1 |
20030185205 | Beshai | Oct 2003 | A1 |
20030204273 | Dinker et al. | Oct 2003 | A1 |
20040010588 | Slater et al. | Jan 2004 | A1 |
20040039891 | Leung et al. | Feb 2004 | A1 |
20040047354 | Slater et al. | Mar 2004 | A1 |
20040054656 | Leung et al. | Mar 2004 | A1 |
20040098447 | Verbeke et al. | May 2004 | A1 |
20040153708 | Joshi et al. | Aug 2004 | A1 |
20040186845 | Fukui | Sep 2004 | A1 |
20040194098 | Chung et al. | Sep 2004 | A1 |
20040204949 | Shaji et al. | Oct 2004 | A1 |
20040208625 | Beshai et al. | Oct 2004 | A1 |
20040210767 | Sinclair et al. | Oct 2004 | A1 |
20040215650 | Shaji et al. | Oct 2004 | A1 |
20040230596 | Veitch et al. | Nov 2004 | A1 |
20040257857 | Yamamoto et al. | Dec 2004 | A1 |
20050021349 | Cheliotis et al. | Jan 2005 | A1 |
20050027801 | Kashyap et al. | Feb 2005 | A1 |
20050036443 | Collins | Feb 2005 | A1 |
20050050545 | Moakley | Mar 2005 | A1 |
20050060406 | Zhang et al. | Mar 2005 | A1 |
20050114478 | Popescu et al. | May 2005 | A1 |
20050144173 | Yamamoto et al. | Jun 2005 | A1 |
20050257220 | McKee | Nov 2005 | A1 |
20050283649 | Turner et al. | Dec 2005 | A1 |
20060179037 | Turner et al. | Aug 2006 | A1 |
20060179106 | Turner et al. | Aug 2006 | A1 |
20060206621 | Toebes et al. | Sep 2006 | A1 |
20070086433 | Cunetto et al. | Apr 2007 | A1 |
20070116234 | Schneider et al. | May 2007 | A1 |
20090024868 | Joshi et al. | Jan 2009 | A1 |
20090276588 | Murase | Nov 2009 | A1 |
Entry |
---|
Cates, “Robust and Efficient Data Management for a Distributed Hash Table”, Master's Thesis, Massachusetts Institute of Technology, May 2003, pp. 1-64. |
Mockapetris, “Domain Names—Concepts and Facilities”, Network Working Group, Request for Comments: 1034, Nov. 1987, pp. 1-55. |
Mockapetris, “Domain Names—Implementation and Specification”, Network Working Group, Request for Comments: 1035, Nov. 1987, pp. 1-55. |
Gulbrandsen et al., “A DNS RR for specifying the location of services (DNS SRV)”, Network Working Group, Request for Comments: 2782, Feb. 2000, pp. 1-12. |
Calhoun et al., “Diameter Base Protocol”, Network Working Group, Request for Comments: 3588, Sep. 2003, pp. 1-147. |
Yokota et al., “A Proposal of DNS-Based Adaptive Load Balancing Method for Mirror Server Systems and Its Implementation”, Proceedings of the 18th International Conference on Advanced Information Networking and Application (AINA '04), vol. 2, Mar. 29-31, 2004, pp. 1-6, Fukuoka, Japan. |
“Dynamic Domain Name Service”, [online], 2003, [retrieved on Nov. 2, 2004]. Retrieved from the Internet: <URL: http://www.dydns.com/services/services.htm, pp. 1-5. |
Wikipedia, “Grid computing”, [online], Jan. 25, 2005, [retrieved on Jan. 26, 2005]. Retrieved from the Internet: <URL: http://wikipedia.org/wiki/Grid_computing>, 3 pages. |
Linksys, “Linksys and Tzolkin Corporation Team-Up to Bundle TZO Dynamic DNS Service with Linksys' Top-Selling Cable/DSL Routers”, [online], Aug. 14, 2000 [retrieved on Nov. 2, 2004]. Retrieved from the Internet: <URL: http://www.linksys.com/press/press.asp?prid=3>, pp. 1-2. |
Karger et al., “Finding Nearest Neighbors in Growth-restricted Metrics”, ACM Symposium on Theory of Computing (STOC '02), May 19-21, 2002, Montreal, Quebec, Canada, 10 pages. |
Anderson et al., “Global namespace for files”, IBM Systems Journal, vol. 43, No. 4, 2004, pp. 702-722. |
Bourbonnais et al., “Towards an information infrastructure for the grid”, IBM Systems Journal, vol. 43, No. 4, 2004, pp. 665-688. |
Carpenter et al., “Abstract interdomain security assertions: A basis for extra-grid virtual organizations”, IBM Systems Journal, vol. 43, No. 4, 2004, pp. 689-701. |
Dabek et al., “Wide-area cooperative storage with CFS”, SOSP '01, Oct. 21-24, 2001, Banff, Canada, 14 pages. |
Burkard, “Herodotus: A Peer-to-Peer Web Archival System”, Master's Thesis, Massachusetts Institute of Technology, May 2002, pp. 1-64. |
Stoica et al., Chord: A Scalable Peer-to-peer Lookup Service for Internet Applications, SIGCOMM '01, Aug. 27-31, 2001, San Diego, California, ACM, pp. 1-12. |
Dabek, “A Cooperative File System”, Master's Thesis, Massachusetts Institute of Technology, Sep. 2001, pp. 1-55. |
Cox et al., “Serving DNS using a Peer-to-Peer Lookup Service”, In the proceedings of the First International Workshop on Peer-to-Peer Systems (IPTPS '02), Mar. 2002, Cambridge, MA, pp. 1-7. |
Butte, “Solving the data warehouse dilemma with grid technology”, IBM Global Services, Aug. 2004, 12 pages. |
Horn et al., “A Logger System based on Web services”, IBM Systems Journal, vol. 43, No. 4, 2004, pp. 723-733. |
Dabek et al., “Building Peer-to-Peer Systems With Chord, a Distributed Lookup Service”, Proceedings of the 8th Workshop on Hot Topics in Operating Systems (HotOS-VIII), May 2001, 6 pages. |
Liben-Nowell et al, “Observations on the Dynamic Evolution of Peer-to-Peer Networks”, In the proceedings of the First International Workshop on Peer-to-Peer Systems (IPTPS '02), Mar. 2002, Cambridge, MA, 6 pages. |
Joseph et al., “Evolution of grid computing architecture and grid adoption models”, IBM Systems Journal, vol. 43, No. 4, 2004, pp. 624-645. |
Lewis et al., “MyMED: A database system for biomedical research on MEDLINE data”, IBM Systems Journal, vol. 43, No. 4, 2004, pp. 756-767. |
Meliksetian et al., “Design and implementation of an enterprise grid”, IBM Systems Journal, vol. 43, No. 4, 2004, pp. 646-664. |
Stoica et al., Chord: A Scalable Peer-to-peer Lookup Protocol for Internet Applications, ACM SIGCOMM 2001, San Diego, Aug. 2001, pp. 149-160. |
Petersen et al., “A Blueprint for Introducing Disruptive Technology into the Internet”, Proceedings of the First ACM Workshop on Hot Topics in Networks (HotNets-1), Princeton, NJ, Oct. 2002, 7 pages. |
Liben-Nowell et al., “Analysis of the Evolution of Peer-to-Peer Systems”, ACM Conf. On Principles of Distributed Computing (PODC), Monterey, CA, Jul. 2002, 10 pages. |
“Preface”, IBM Systems Journal, vol. 43, No. 4, 2004, pp. 622-623. |
Sit et al., “Security Considerations for Peer-to-Peer Distributed Hash Tables”, In the proceedings of the First International Workshop on Peer-to-Peer Systems (IPTPS '02), Mar. 2002, Cambridge, MA, pp. 1-6. |
Tan et al., “Service domains”, IBM Systems Journal, vol. 43, No. 4, 2004, pp. 734-755. |
Wired Magazine, “The BitTorrent Effect”, [online], 2005, [retrieved on Jan. 26, 2005]. Retrieved from the Internet: <URL: http://www.wired.com/wired/archive/13.01/bittorrent_pr/html>, 8 pages. |
Number | Date | Country | |
---|---|---|---|
20120271944 A1 | Oct 2012 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13221625 | Aug 2011 | US |
Child | 13541621 | US | |
Parent | 11053954 | Feb 2005 | US |
Child | 13221625 | US |