This patent document contains information subject to copyright protection. The copyright owner has no objection to the facsimile reproduction by anyone of the patent document or the patent, as it appears in the U.S. Patent and Trademark Office files or records but otherwise reserves all copyright rights whatsoever.
Aspects of the present invention relate to network traffic management. Other aspects of the present invention relate to configurable, adaptive, global traffic control and management in networks such as the Internet.
As the volume of Internet traffic grows, providers of web content and applications increasingly need to deliver content from multiple servers at widely-separated locations in order to sustain a good end-user experience under high traffic loads. This need generates several difficult challenges, including, among others:
This invention solves these and other problems by providing a means to distribute network (e.g., Internet) traffic according to a configurable set of rules. The rules can be configured to take into account key factors such as:
These rules together provide an extremely fine-grained level of Internet traffic control to providers of Internet content and applications, enabling them to dramatically improve the end-user experience (measured by speed of request resolution, associated download time, and the availability of servers) over that provided by conventional web servers and mirrored server farms.
There are many potential uses for the invention. One use is to provide a stand-alone service directing traffic exclusively to a set of designated servers managed by a single organization. The invention may also be used in more general ways—for example, one or more of the designated destinations can refer to servers (or server collections) outside the organization's control. The latter case includes, for example, Content Delivery Networks (CDNs), as well as local load-balancing servers, as potential destinations. The invention can also be used, e.g., to provide the DNS (Domain Name Service) component of a Content Delivery Network itself. It can be deployed as a service on behalf of subscribers, or it can be deployed as software to be used directly by subscribers themselves.
The present invention is further described in terms of exemplary embodiments, which will be described in detail with reference to the drawings. These embodiments are non-limiting exemplary embodiments, in which like reference numerals represent similar parts throughout the several views of the drawings, and wherein:
FIG. 8(a) is an exemplary flowchart of a process, in which a content delivery framework provides adaptive policy-based domain name service;
FIG. 8(b) is an exemplary flowchart of a process, in which a domain name server resolves a hostname based on policies;
FIG. 8(c) is an exemplary flowchart of a process, in which a monitoring mechanism of a domain name server dynamically monitors the name service operations as well as the availability and the load share status of servers;
FIG. 8(d) is an exemplary flowchart of a process, in which an ATC administrative network maintains dynamic policies and monitors the operations of a content delivery framework;
FIG. 8(e) is an exemplary flowchart of a process, in which an ATC network monitoring mechanism traps events from different domain name servers and generates alerts when necessary;
FIG. 9(a) shows an exemplary secure web-based graphical interface, through which a subscriber may define load share and shed fraction policies among static resources;
FIG. 9(b) shows an exemplary interface for defining policies for dynamic servers;
FIG. 9(c) shows an exemplary graphical user interface through which an overflow server may be defined using a canonical name;
FIGS. 10(a)-10(o) show exemplary subscriber policies; and
FIGS. 11(a)-11(c) show exemplary archived log information that can be displayed and viewed through a graphical user interface.
The invention is described below, with reference to detailed illustrative embodiments. It will be apparent that the invention can be embodied in a wide variety of forms, some of which may be quite different from those of the disclosed embodiments. Consequently, the specific structural and functional details disclosed herein are merely representative and do not limit the scope of the invention.
Although there are a number of different scenarios in which the invention might be deployed, this description will focus, for clarity and example only, on a scenario in which DNS service is provided by a third party on behalf of a content or applications provider.
As used in this description, the framework according to embodiments of this invention contemplates three groups of users:
An end-user who wishes to access content and/or applications over the Internet.
An operator who offers and supports the service on a managed network of computers.
A subscriber (typically a content or applications provider) who subscribes to the operator's service to help to support the needs of the end user.
The terms end-user, operator, and subscriber are used throughout this description to distinguish these three roles, although there are many scenarios in which more than one role can be taken by a single entity. Such scenarios are contemplated by this invention.
Each server group 106 may correspond to a server hierarchy which includes one or more tiers of servers. For example, the first tier of a server hierarchy may comprise one or more primary servers and the second tier of the server hierarchy may comprise one or more overflow servers that are used when the primary servers in the first tier fail to function properly. In general, for the j-th server group, the primary servers at the first tier of server group 106-j are denoted 108-j, and the overflow servers at the second tier of server group 106-j are denoted 110-j.
Each server in a server group is any process or collection of processes that provides resources in response to requests, e.g., from a client. A server can be any off-the-shelf Web server. In some embodiments, the servers are typically Web servers such as the Apache server or Netscape Communications Corporation's Enterprise™ server.
Client 112 accesses the subscriber server network 104 in order to obtain content from the subscriber. Content includes any kind of data, including, without limitation, video and audio data and the like. To achieve this access, a user at client 112 enters a resource locator, e.g., a Uniform Resource Locator (“URL”), into a browser 114 on client 112. URLs specify the location of resources (information, data files, etc.) on the network. URLs are defined in detail in T. Berners-Lee et al., Uniform Resource Locators (URL), Network Working Group, Request for Comments: 1738, Category: Standards Track, December 1994, located at “http://ds.internic.net/rfc/rfc1738.txt”, which is hereby incorporated herein by reference. URLs generally have the following form:
The framework or system 100 includes at least one Domain Name Service (DNS) name server 118-1. In preferred embodiments, the system 100 also includes DNS name servers 118-1, 118-2, . . . , 118-n, (collectively referred to as name servers 118) all operated by a single particular entity. In the embodiment shown in
When the client's browser 114 obtains a request (e.g., in the form of a URL), the browser queries its resolver 116 for an address for the hostname specified in the requested URL. The resolver 116 eventually queries a particular name server (e.g., name server 118-1). The name server 118-1 returns (provides or attempts to provide) the IP (Internet Protocol) address (or addresses) of a server (or servers) in the subscriber server network or a CNAME (a domain name). Other DNS answer types (e.g., MX record, NS record, etc.) are also possible, as described below. The determination of the particular IP address returned to the resolver 116 may be based on a number of factors, including the resolver's location (e.g., as determined from the resolver's IP address) and various policies (e.g., subscriber policies 120, other policies 122) in a policy database 124. The client's browser 114 is then able to communicate with the selected server in the subscriber server network in order to obtain the desired resource.
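By way of illustration, the resolver-to-name-server step described above can be exercised directly with an ordinary DNS client. The following is a minimal sketch, not part of the described system; it assumes the third-party dnspython package (2.x), and the name server address 192.0.2.53 and the hostname are placeholders.

```python
# Illustrative only: queries a hypothetical ATC name server directly for a
# subscriber hostname, mimicking the resolver-to-name-server step described
# above. Assumes the third-party dnspython package (2.x); the server address
# 192.0.2.53 and the hostname are placeholders.
import dns.resolver

resolver = dns.resolver.Resolver(configure=False)
resolver.nameservers = ["192.0.2.53"]        # hypothetical ATC name server (e.g., 118-1)

answer = resolver.resolve("dl.customer.com", "A")
for record in answer:
    print("server selected by ATC policy:", record.address)
# The name server may instead return a CNAME (e.g., pointing at a CDN), which
# the resolver then follows in the usual way.
```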
Name servers 118-1, 118-2, . . . , 118-n, according to embodiments of the present invention, include an adaptive traffic control (ATC) mechanism 126 which provides domain name service to a client based on policies in the policy database 124. Each name server 118 includes or has access to a location determination mechanism 128 for associating the client's request with the client's location. Any program that can determine a requestor's location may be used.
The subscriber server network 104 represents a network of servers that provides, on behalf of an underlying subscriber, Internet content or services. For example, a subscriber may be a content provider, which has its own network of servers that deliver content to end users' browsers via the Internet. The subscriber server network 104 may be configured in such a way that the processing of the service requests may be reasonably distributed among all the servers in the server network 104, according to some criteria. To achieve that distribution, the subscriber server network 104 may be organized, for example, to have one or more server groups (e.g., server group 1 106-1, . . . , server group k 106-k), each of which may be responsible for processing a portion of the service requests. For example, service requests coming from Finland may be routed to a server group that is physically located in Europe.
Each server group 106 in the subscriber server network 104 may comprise a plurality of servers to further share the load. For example, a service request from Finland may be subsequently routed to a server located in Scandinavia. Routing service requests to different server groups and subsequently to different servers may be based on various criteria. For example, such routing may be based on the distance between the origin of the request and the location of the server. For example, for a service request originated from Finland, it may be more effective and efficient to direct the request to a server located in Norway instead of directing it to a server located in the U.S. Server load may also be used to determine where to route a service request. For example, a service request originated from the U.S. may be routed to a server group in Mexico if the server group in the U.S. is overloaded while the server group in Mexico is relatively idle.
Servers in a server group may be organized into a hierarchy with one or more tiers of servers. Servers at different tiers may have different designated purposes. For instance, servers at the first tier of a server hierarchy may be the primary functioning servers; servers at the second tier may be overflow servers that become active only when the primary servers at the first tier become unavailable or overloaded; and servers at the third tier may be second-layer overflow servers that become active only when both the primary servers and the first-layer overflow servers at the second tier become unavailable or overloaded; and so on.
The first tier of a server group hierarchy may include one or more primary servers. When multiple primary servers are present, they may be configured in such a way that they share the load. For example, if there are five primary servers at the first tier, they may be configured so that each takes 20% of the total service requests routed to the server group. Each such primary server may be further configured so that, when it is overloaded or fails, the load share originally designated to it is shed or directed to other server(s). Load shedding may also be configured for a server that is not overloaded. The portion to be shed to other server(s) may be governed by certain pre-defined policies.
As discussed above, servers in a server group 106 may be classified into different categories based on specific functionality of the underlying servers. For example, a server in a server group 106 may be a primary server, a first layer overflow server, or a second layer overflow server, etc. A server may also be classified in terms of whether and how a server is to be dynamically monitored. For instance, in some embodiments, a server can be classified as either a monitored server or a managed server.
A classification of a monitored server indicates that the underlying server is to be monitored dynamically for its availability. In this case, the server may be probed for its availability according to some pre-determined schedule. Such a probe may be sent to the underlying server from different locations of the network so that the availability can be detected accurately. That is, if the underlying server fails to respond to one probe sent from one location, this does not necessarily indicate that the server is no longer available (the failure could be due to congestion in only part of the network).
A probe can be realized in different ways. It can be simply a signal sent to the server to request an acknowledgement. It can also be a poll operation in which a file stored at a designated location on the underlying server is polled. If the file can be successfully polled, the underlying server is considered to be available. The detected availability may be used to adjust or update policies associated with the server so that the network traffic management will respond to the dynamics related to the server.
A server that is classified as a managed server may actively participate in the adaptive policy-based management scheme. Similar to a monitored server, a managed server may be regularly probed for its availability. In addition, a managed server may dynamically provide information related to its load share or load shed. A managed server may update its load share or load shed fraction according to its changed capacity or its current load. For example, when a managed server is upgraded, its capacity may be increased so that it may increase its load share or decrease its load shed fraction. On the other hand, when a managed server is overloaded, it may revise its load share to a lower level or increase its load shed fraction to prevent failure. A managed server may revise its load share or load shed by updating the corresponding load share or load shed information in a designated file stored on the server. This file may be polled by a name server so that the dynamically changed load share and load shed information can be used to direct traffic accordingly.
The distribution of service requests (from a client 112) within the subscriber server network 104 may be controlled through a set of ATC policies (120, 122) stored in the policy database 124. Various authorities may influence the ATC policies and may create policies in the policy database 124. For example, the subscriber may set up policies to direct traffic with respect to considerations such as the geographical locations and the capacities of the underlying servers in the subscriber's network 104. Other policy entities 138, including, for example, the operator of the name servers 118 or various geo-political entities may also have policies regarding how the network traffic should be managed and directed. For example, governmental or some organizational agencies may regulate some aspects of network traffic policies. Such regulation policies may be required to be incorporated so that a service request from the client 112 can be routed in a manner that satisfies regulatory policies.
Thus, the ATC policies in the policy database 124 may be a combination of subscriber policies 120 and other policies 122 from different sources such as subscriber 102 and policy entity 138. Policies from different sources may be accessed by the ATC mechanism 126 from the policy database 124. From the point of view of the ATC mechanism 126, the source of a policy may not be relevant or even determinable. For example, policies may be defined in a textual file stored at a designated location, which may be downloaded to an ATC policy management mechanism 152 in an ATC administrative framework 142 and then broadcast to the database manager 132 located in each of the name servers in the ATC name server network 140. The download may be via either a graphical user interface (GUI), a file transfer protocol (FTP), or some other mechanism. Policy makers may also enter policies directly via a web-based GUI. For example, the subscriber 102 may enter subscriber policies 120 via a browser interface 156 connected with the ATC administrative framework, preferably via a secure interface (e.g., implemented using the “https” protocol).
The policies from the policy database 124 used by the ATC mechanism 126 are collectively referred to as ATC policies, which may include, but are not limited to, the subscriber policies 120 as well as other policies 122. The ATC policies may be organized in a manner that is appropriate to govern and/or control the traffic at different levels of the subscriber server network 104.
The ATC policies (200) may be classified into different types such as geo-political policies 215, load share policies 240, failover policies 245, tiered failover policies 250, shedding policies 255, regulatory policies 260, and Classless Inter-Domain Routing (CIDR) block policies 210.
These exemplary policies in the policy database 124 are described in greater detail below:
Geographic Policy (215): Decisions are based on the location of the end-user or an approximation thereof, e.g., using the IP address of an end-user's resolver. For example, if the IP address is within the U.S., the request may be directed to a “domestic” group of servers in the subscriber server network 104, otherwise it may be directed to “international” servers in the network. The request may be directed to a proprietary Content Delivery Network (CDN) or to another service specified by the subscriber.
Load Share Policy (240): The subscriber can explicitly specify the amount of traffic (load share) to be directed to each of their servers within a defined server set in the subscriber server network 104. Typically the load may be specified according to the capacity of each server. Based on such specification, the load share of each server may be derived as a percentage of the total load.
Failover policy (245): The subscriber may specify policies regarding a failover situation where some of the primary servers fail to function. To take care of such a scenario, a failover policy may instruct the ATC mechanism in terms of, for example, how often to monitor the availability of the servers and what strategy to adopt when a partial set of the primary servers are detected to be unavailable. For instance, a failover policy may specify to distribute the load of a failing server to other primary servers. It may also alternatively instruct the ATC framework to direct traffic to servers other than the primary servers.
Tiered Failover Policy (250): The subscriber may specify a strategy through tiered failover policies by which the load should be re-directed to servers at a next tier when one or more subscriber servers in a previous tier fail to function. For example, in defining the subscriber server network, each server group may be configured as a hierarchy, having the first tier of primary servers, the second tier of first layer overflow servers, and the third tier of second layer overflow servers, etc. In this case, a tiered failover policy may be defined to indicate when the load should be re-directed from the primary servers to the overflow servers. For instance, a tiered failover policy may indicate that when all the primary servers fail, the load should be directed to the second tier, and when servers at both the first tier and the second tier fail, the load should be directed to the overflow servers at the third tier. The traffic may also be re-directed to some other servers. For example, when a content delivery network (CDN) is available, the traffic may be re-directed to the entire CDN.
Shedding Policy (255): In some circumstances, a fraction of the “load” originally designated to a server may be shed or re-directed to one or more different servers. This may occur when the amount of traffic directed to the subscriber server exceeds a prescribed level. In these cases, a subscriber-specified fraction of traffic (shed fraction) that would otherwise be directed to the server may be shed to one or more other servers. Such a strategy may be adopted to prevent catastrophic failure due to overload. The load may be shed to an overflow server or to some other servers, such as those of a content delivery network. An overload situation may be detected according to the response time of the server. For example, if the response time from a server becomes long, it may indicate that the server is overloaded. In this case, shedding policies 255 may be invoked to re-direct the traffic elsewhere. For instance, if there are a total of 3 primary servers in a server group with load shares of (0.3, 0.3, 0.4) and the primary server that is designated to take 40% of the total load is completely overloaded, a shedding policy may specify to shed the load of this server by re-directing 50% of its original load to, for example, the servers located in the CDN 105. A shedding policy may also specify a condition upon which the traffic will be directed again to the shedding server. Such a condition may relate to a desirable level of performance of the shedding server.
CIDR Policy (210): Policy decisions are supported based upon CIDR blocks of IP address space. CIDR denotes Classless Inter-Domain Routing, an IP addressing scheme that replaces the older system based on classes A, B, and C. With CIDR, a single prefix (an IP address combined with a mask length) designates a whole block of IP addresses, so a policy can route requests according to which block contains the requesting resolver's address (a small routing sketch follows this list of policies).
Regulatory policy (260): Certain policies may be specified by some policy entities to control network traffic. Such policies may be enforced in system 100.
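As a concrete illustration of a CIDR block policy decision, the following minimal sketch uses Python's standard ipaddress module. The CIDR blocks and the answers they map to are hypothetical examples, not values taken from this description.

```python
# Minimal sketch of a CIDR block policy decision, using Python's standard
# ipaddress module. The CIDR blocks and the answers they map to are hypothetical
# examples, not values from this description.
import ipaddress

CIDR_POLICY = [
    (ipaddress.ip_network("198.51.100.0/24"), "us-east server group"),
    (ipaddress.ip_network("203.0.113.0/24"),  "europe server group"),
]
DEFAULT = "international server group"

def route_by_cidr(resolver_ip: str) -> str:
    """Return the answer for the first CIDR block containing the resolver's address."""
    addr = ipaddress.ip_address(resolver_ip)
    for block, answer in CIDR_POLICY:
        if addr in block:
            return answer
    return DEFAULT

print(route_by_cidr("198.51.100.7"))   # -> us-east server group
print(route_by_cidr("192.0.2.1"))      # -> international server group
```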
A policy may be static or dynamic. Selection of a server from the subscriber server network 104 may be based on an adaptive, regularly updated map of the state of the Internet as well as adaptively updated policies. The map may cluster IP addresses together according to their network latency to a selected set of network agents. This enables the subscriber servers to be selected according to their “network proximity” to an end user's browser, optimizing the resulting download time. The policies may be defined in such an adaptive manner that they reflect the dynamic status of the servers such as the availability and load.
Geo-political policies 215 may govern the selection of a server according to where the client is located. As shown in
Geo-political policies differ from load-based policies (described above). The former are designed to guide selections based on geographical or time criteria; the latter concern the selection process with respect to the dynamic capacity and load of the underlying servers. The load share policies 240 govern the selection according to the capacities of the servers. Partial failover or tiered failover policies (245 and 250) govern the traffic re-direction process when functioning servers in a server group are overloaded or have failed.
The ATC policies 200 may be hierarchically constructed to form a decision tree.
As noted, a policy may be created by generating a decision tree representing the ATC rules that the user (content provider/subscriber) wants to apply. At the leaves of this tree (referred to as resource nodes) a user specifies the answers (IP addresses or CNAMEs) that the user wants ATC to provide in response to a DNS request. At the branches of this tree (referred to as branch nodes) the user specifies the various decision criteria that the user wants to apply. Those skilled in the art will realize and understand that, in this discussion about policy setting and control of the ATC, the term “user” (as opposed to “end user”) generally refers to the content provider/subscriber whose content is being delivered.
Branches within a policy decision tree enable ATC decisions based, e.g., upon the IP address of the requesting resolver. Branches can be selected based upon different criteria. In a presently preferred exemplary embodiment, branches can be selected based on five different criteria:
Those skilled in the art will understand, upon reading this description, that different and/or other criteria may be used to make branch selections. Examples of such criteria include, without limitation, time of day, day of week, etc.
Resources are selected to occupy the leaves of a policy decision tree. Resources are used to specify the ATC answers to DNS queries. ATC offers substantial decision making power during the resource selection process. It is here, for example, that traffic can be distributed according to load share and shed fraction criteria. In a presently preferred exemplary implementation there are five types of resource nodes that can be selected in the policy decision tree:
Those skilled in the art will understand, upon reading this description, that different and/or other resource nodes may be used.
IP address(es) are used when the current decision tree leaf points to one or more fixed IP addresses. CNAME records are used when the current decision tree leaf points to a destination whose CNAME is known.
MX Records are used when the current decision tree leaf points to one or more MX (Mail Exchange) records. MX records may be combined with IP address or Load Sharing Server Set resources. To select this option, the user first enters the Exchange, Pref, and TTL (Time To Live) details for the mail exchange. The Exchange field holds the domain name of the host which will receive mail. If the Exchange name lies under the Domain Name Origin, then the policy must specify an IP address for the name by means of another node label and appropriate resource (IP address or Load Sharing Server Set). A Pref field holds the preference order of the host; lower numbers are preferred (used first) over larger numbers.
NS Records are used when the current decision tree leaf points to one or more NS (Name Server) records. NS records are used to specify a delegation from the ATC name servers to another set of name servers.
The Load Sharing Server Set (LSSS) option is used when the current decision tree leaf points to a number of destinations among which the user wants to share load (including failover) according to arbitrary criteria. In presently preferred exemplary implementations, this decision takes place only after any decisions that the user wants to make via branch nodes based on the IP address of the requesting resolver. There are presently four different types of load sharing server resources, namely static servers, managed servers, dynamic servers, and overflow servers.
In a present implementation, Managed Servers are preferred over the other server types.
Load sharing server sets (LSSS) are necessary to utilize the load share, tiered/overflow and failover rules. When an ATC name server encounters a server set, one or more answers are chosen based on several criteria, including, e.g., the load share of each server, the tier in which each server sits, the shed fraction of each server, and the monitoring results of each server.
Static, dynamic and managed servers have load share, shed fraction and, when in a set, tier parameters. An overflow server is, by definition, at the lowest tier and cannot shed; thus it has only the load share parameter. Dynamic and managed servers each may have the load share and shed fraction values updated continuously and may be deemed on or off-line due to monitoring.
When answering a DNS query from a LSSS, ATC preferably selects answers (IP addresses or a CNAME) from servers included in the set. ATC examines servers in tier order (tier 1 first, tier 2 next, and so on, with overflow servers last), selecting servers based on load share, shed fraction and monitoring results. Online servers are selected within a tier according to their load share values relative to other servers in the same tier. If a candidate server is shedding, e.g., 10% (i.e., has a shed fraction of 0.1), ATC will keep that server in the DNS reply nine tenths of the time, and look in the next tier for a server one tenth of the time. If there is only one tier, the shed traffic returns to the same tier and another server is selected. If the LSSS specifies that multiple answers should be returned, ATC will look for each answer starting from the first tier. If any server selection falls to the overflow servers, one of these is selected as the sole answer to the DNS reply. Typically, the overflow server is a CNAME, and DNS answers of this type should not be combined with any other record type.
In a presently preferred implementation, the number of answers chosen from Load Sharing Server Sets depends on two more user-settable parameters, the “number of answers desired” (NAD) and the “number of answers required” (NAR). These parameters control how many answers are picked from non-overflow servers.
The user sets the NAD parameter to the typical number of answers to be selected by ATC and returned in DNS queries. The NAR parameter, which must be less than or equal to the NAD, comes into play when the normal ATC decision making process cannot select the desired number of servers specified in NAD. This server count shortfall situation can arise when one or more servers are shedding or offline. Setting NAR to one (1) or more guarantees that ATC will never reply to a DNS query with no servers. When NAD is greater than one, ATC may deviate from the desired load share ratios, e.g., if there are three servers and NAD is three then DNS replies will typically have all three servers, irrespective of their load share values.
If the number of servers selected is less than NAD, but greater than or equal to NAR, then server selection is finished. However, if the number of servers selected is less than NAD and less than NAR, then ATC progressively ignores shed fraction and load share values until it is able to select NAR servers. As a special case, if ATC cannot select any servers because they are all offline, all servers are returned in the DNS reply (subject to size constraints of the DNS packet).
As an example, consider a LSSS with two servers in tier 1, two servers in tier 2, and two answers desired and required (i.e., NAD=2, NAR=2). If both servers in tier 1 are online and not shedding, those servers will be returned. If one of the tier 1 servers has 10% shed, then 10% of the answers contain one server from tier 1 and one server from tier 2. If both servers from tier 1 are offline, the two servers from tier 2 are returned. If all four servers are offline, then all four are returned (the special case).
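The selection rules in the preceding paragraphs can be summarized in sketch form. The following is a minimal, illustrative sketch and not the ATC implementation: server names are made up, the order in which constraints are relaxed to reach NAR is an assumption, and the rule that an overflow selection becomes the sole answer is omitted for brevity.

```python
# Minimal sketch of the tiered answer-selection rules described above. This is
# not the ATC implementation: server names are illustrative, the order in which
# constraints are relaxed to reach NAR is an assumption, and the rule that an
# overflow selection becomes the sole answer is omitted for brevity.
import random
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Server:
    name: str
    tier: int              # 1 = primary tier; higher numbers are later/overflow tiers
    load_share: float      # relative weight within its tier
    shed_fraction: float = 0.0
    online: bool = True

def _weighted_pick(candidates: List[Server]) -> Optional[Server]:
    """Pick one online server at random, weighted by load share (None if none online)."""
    pool = [s for s in candidates if s.online]
    if not pool:
        return None
    weights = [max(s.load_share, 0.0) for s in pool]
    if sum(weights) == 0:
        weights = [1.0] * len(pool)
    return random.choices(pool, weights=weights)[0]

def _pick_one(server_set: List[Server], exclude: List[Server]) -> Optional[Server]:
    """Walk the tiers in order, honoring shed fractions, and return one server."""
    tiers = sorted({s.tier for s in server_set})
    for i, tier in enumerate(tiers):
        candidates = [s for s in server_set if s.tier == tier and s not in exclude]
        choice = _weighted_pick(candidates)
        if choice is None:
            continue                          # tier offline/exhausted: try next tier
        if random.random() < choice.shed_fraction:
            if i + 1 < len(tiers):
                continue                      # shedding: look in the next tier instead
            # Last tier: shed traffic returns to the same tier and another server is tried.
            choice = _weighted_pick([c for c in candidates if c is not choice]) or choice
        return choice
    return None

def select_answers(server_set: List[Server], nad: int, nar: int) -> List[Server]:
    answers: List[Server] = []
    for _ in range(nad):                      # look for each answer from the first tier
        choice = _pick_one(server_set, exclude=answers)
        if choice is not None and choice not in answers:
            answers.append(choice)
    if len(answers) < nar:
        # Shortfall: ignore shed fraction and load share (assumed relaxation order);
        # if every server is offline, return them all, per the special case above.
        extra = [s for s in server_set if s.online and s not in answers]
        answers += extra[: nar - len(answers)]
        if not answers:
            answers = list(server_set)
    return answers

# The worked example above: two servers in tier 1, two in tier 2, NAD = NAR = 2.
tier1 = [Server("a1", 1, 1.0), Server("a2", 1, 1.0)]
tier2 = [Server("b1", 2, 1.0), Server("b2", 2, 1.0)]
print([s.name for s in select_answers(tier1 + tier2, nad=2, nar=2)])
```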
In some implementations, the LSSS may also have a Style parameter which may be used to control where traffic goes when it is directed away from a shedding server, or from an offline server, according to the following table.
Consider, e.g., a LSSS which contains a few servers in Tier 1 and an overflow server sending traffic to a CDN. The three styles above relate to three underlying motivations for how to split traffic between the Tier 1 servers and the CDN.
If the Tier 1 servers should always take a specified percentage of traffic, as a whole, then the Constant Tier Load Ratio style is best. For example, for the CDN to always take 50% of the traffic, each of the Tier 1 servers would have their shed fraction set to 0.5. This way, even if servers are added or removed from Tier 1, that tier and the CDN will always get 50% of the traffic. This, of course, means that the load on the Tier 1 servers will vary as servers are added or removed.
If, instead, it is not acceptable for the Tier 1 servers to increase their load upon the failure of another Tier 1 server, the Constant Server Load Ratio style is best. With this style, both shed traffic and traffic that would have gone to an off-line Tier 1 server are directed to the next tier (in the example above, the CDN). This is a preferable scenario if the Tier 1 servers are generally under high load. Lastly, if the Tier 1 servers are generally under low load and the CDN is to be considered a last resort, the Maximal Tier Load style should be used. With this style, both shed traffic and traffic that would have gone to an off-line Tier 1 server are instead directed to other servers in Tier 1. Traffic only gets to the CDN if all servers in Tier 1 are offline or shedding.
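The behavior of the three styles can be summarized as a small lookup table. This is a sketch reconstructed from the descriptions above, not the table referenced earlier; in particular, the destination of an off-line server's traffic under Constant Tier Load Ratio is inferred from the example rather than stated explicitly.

```python
# Sketch of where redirected traffic goes under each Load Sharing Server Set
# "Style", summarizing the descriptions above. The representation is
# illustrative, and the off-line behavior shown for Constant Tier Load Ratio is
# inferred from the example above rather than stated explicitly.
STYLE_BEHAVIOR = {
    # style name:                 (shed traffic goes to,  off-line server's traffic goes to)
    "Constant Tier Load Ratio":   ("next tier",           "other servers in the same tier"),
    "Constant Server Load Ratio": ("next tier",           "next tier"),
    "Maximal Tier Load":          ("other servers in the same tier",
                                   "other servers in the same tier"),
}

for style, (shed_dest, offline_dest) in STYLE_BEHAVIOR.items():
    print(f"{style}: shed -> {shed_dest}; off-line -> {offline_dest}")
```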
The Static Server option is used to share load at the current decision tree leaf among a set of servers that the user does not expect to change often over time. The load share for a given server represents the amount of traffic that will be sent to that server within the same tier of the server set. The fraction of traffic sent to that server is computed by summing the load share numbers for all servers in the same tier of the server set and dividing the load share by that total. The shed fraction for a given server represents the fraction (from 0 to 1) of traffic that should be redirected away from a given server, after the initial decision based upon load share. The online flag can be used to remove a server from the list of ATC answers if the user wishes to take a server off-line for maintenance. (If a server has been switched off-line, end users may continue to access it for the duration of the TTL, or perhaps longer if the end user's resolver or browser ignores the TTL.)
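The arithmetic just described can be expressed directly. The following is a minimal sketch; the load share values are illustrative.

```python
# Minimal sketch of the Static Server arithmetic described above: a server's
# traffic fraction is its load share divided by the total load share of its tier,
# and its shed fraction is then applied to that result.
def traffic_fraction(load_share, tier_load_shares, shed_fraction=0.0):
    total = sum(tier_load_shares)
    base = load_share / total if total else 0.0
    return base * (1.0 - shed_fraction)

# Three servers in one tier with load shares 2, 2 and 1: the last server receives
# 1 / (2 + 2 + 1) = 20% of the tier's traffic, or 10% if it sheds half of that.
print(traffic_fraction(1, [2, 2, 1]))        # 0.2
print(traffic_fraction(1, [2, 2, 1], 0.5))   # 0.1
```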
In addition to default values for load share and shed fraction, the managed server may optionally be monitored to determine the on/off-line status, and polled for load feedback information (updated values for the load share and shed fraction). Furthermore, a managed server may hold either an IP address or a CNAME. A managed server holding an IP address without monitoring or load feedback becomes equivalent in functionality to the static server. Managed servers with monitoring only (load feedback not configured) are used when the user expects the load share and shed fraction of the servers to change infrequently, but wants an instant reaction if the user's server becomes unavailable. In a presently preferred implementation, ATC monitors the user's server by polling a user-specified resource (denoted by a user-specified URL) at regular intervals (e.g., every 30 seconds) from each ATC name server. The URL scheme may be, e.g., HTTP, HTTPS or TCP. With HTTP(S), ATC name servers independently send a HEAD request, optionally with HTTP's basic authentication, and examine the response code. Response codes from 200 to 399, received within 15 seconds of the probe, put the server in an online state as viewed by the probing name server; all other conditions (connection refused, server 400 or 500 error, 15-second time-out, etc.) lead to an offline state for the server until the next probe.
Alternatively, ATC can monitor the user's server by checking the ability to connect to a specified port. In this case, the URL format is tcp://HOST[:port], and the ATC name servers simply attempt to establish a TCP connection to determine the online state of the user's server. The same probing rate and timeout durations apply.
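The two probe types described above map naturally onto standard HTTP and TCP operations. The following is a minimal sketch using Python's standard library; it is illustrative rather than the actual ATC monitoring code, the hostnames are placeholders, and the default port of 80 for a tcp:// URL without an explicit port is an assumption.

```python
# Minimal sketch of the two probe types described above; this is illustrative,
# not the actual ATC monitoring code. An HTTP(S) probe sends a HEAD request and
# treats response codes 200-399 received within 15 seconds as "online"; a TCP
# probe simply attempts to establish a connection. The default port of 80 for a
# tcp:// URL with no port is an assumption.
import socket
import http.client
from urllib.parse import urlparse

PROBE_TIMEOUT = 15  # seconds, as described above

def probe(url: str) -> bool:
    parts = urlparse(url)
    if parts.scheme in ("http", "https"):
        conn_cls = (http.client.HTTPSConnection if parts.scheme == "https"
                    else http.client.HTTPConnection)
        try:
            conn = conn_cls(parts.hostname, parts.port, timeout=PROBE_TIMEOUT)
            conn.request("HEAD", parts.path or "/")
            status = conn.getresponse().status
            conn.close()
            return 200 <= status <= 399
        except (OSError, http.client.HTTPException):
            return False    # connection refused, time-out, etc. -> off-line
    if parts.scheme == "tcp":   # the tcp://HOST[:port] form described above
        try:
            socket.create_connection((parts.hostname, parts.port or 80),
                                     timeout=PROBE_TIMEOUT).close()
            return True
        except OSError:
            return False
    raise ValueError("unsupported probe scheme: " + parts.scheme)

print(probe("http://www1.example.com/"))
print(probe("tcp://www2.example.com:80"))
```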
Suppose, for example, that a user wishes to use ATC for name service for download.example.com. Suppose too that the user has two servers and wants ATC to monitor them both and only return the IP address of an available server. In this example, the user could define two Managed servers, each with a Monitored URL as shown in the following table:
The Monitored URL can point to any resource available to the web server, in this case the web server's root. When these servers are used in a server set, ATC continuously polls the monitoring URLs to determine the on/off-line state of the user's servers.
A user may specify the load share and/or shed fraction of the user's servers dynamically, using a Load Feedback URL to specify an XML resource file (at a location of the subscriber's choice) in the following form (preferably plain text):
The file must be made accessible by the subscriber via http or https, optionally with basic authentication. In a presently preferred implementation, the onLine field is required, and all others are optional. LS (the loadShare value) is a number greater than or equal to zero, SF (the shedFraction value) is a number from 0 to 1, OL (the onLine value) is either True or False, and ET (the expireTime value) is a time (either absolute GMT or relative).
The expireTime parameter is the absolute time at which the current XML file expires, or the relative amount of time, post-probe, at which the current XML file expires. After a retrieved XML file expires, the loadShare and shedFraction return to their default values. In this manner a user (i.e., content provider) can temporarily (or permanently) override default load share and/or shed values.
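The exact markup of the load feedback file is not reproduced here. The sketch below assumes a single element named ATCServerStatus (the element name used later in this description) carrying loadShare, shedFraction, onLine and expireTime attributes, and shows how a poller might read it; that layout is an assumption made purely for illustration.

```python
# Sketch of reading a load feedback file of the kind described above. The exact
# markup is not reproduced in this description; a single ATCServerStatus element
# carrying loadShare, shedFraction, onLine and expireTime attributes is an
# assumed layout, used here for illustration only.
import xml.etree.ElementTree as ET

SAMPLE = '<ATCServerStatus loadShare="2" shedFraction="0.1" onLine="True" expireTime="+5m"/>'

def parse_feedback(xml_text, default_ls=1.0, default_sf=0.0):
    elem = ET.fromstring(xml_text)
    online = elem.get("onLine", "False").lower() == "true"       # onLine is required
    load_share = float(elem.get("loadShare", default_ls))        # optional, >= 0
    shed_fraction = float(elem.get("shedFraction", default_sf))  # optional, 0 to 1
    expire = elem.get("expireTime")    # optional: absolute GMT or relative (e.g. "+5m")
    return online, load_share, shed_fraction, expire

print(parse_feedback(SAMPLE))   # (True, 2.0, 0.1, '+5m')
```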
The Managed Server specification with both Monitoring and Load Feedback may then look like the following table:
The load feedback resource does not need to be on the server itself. The user may wish to use an independent server to control the load share and shed fraction of some or all of the user's servers. The state of the user's managed servers is maintained by each ATC name server as follows:
If the user's server uses neither a monitored URL nor a load feedback URL, it is considered to be always on-line with default load share and shed fraction.
If the user's server uses a monitored URL but not a load feedback URL, it is considered to be on-line when the monitored URL is successfully retrieved (HTTP response code from 200 to 399 within 15 seconds), otherwise it is considered to be off-line.
If the user's server uses both a monitored URL and a load feedback URL, the server status is preferably given by the following table:
In the above table, “No XML” means that either the XML file could not be retrieved within the 15 second time-out period (e.g., due to network error), the XML file was retrieved but could not be parsed, or the XML file had expired. “Last known” refers to the full contents of the last successfully retrieved XML file. Last known values expire and cease to exist at the expire time if the expire time is specified in the XML file. The term “defaults” refers to the default load share and shed fraction values specified in the Managed resource server.
If the user's server uses the Load Feedback URL but no Monitored URL, the user can achieve an N minute delayed failover by putting in the XML snippet a relative expireTime of N minutes (+Nm) and a loadShare of one, while having in the policy a default Load Share of 0.
Dynamic Server
The Dynamic Server option may be used to share load at the current decision tree leaf among a set of servers when the user expects that the load share, shed fraction, or server availability may change reasonably often. (The managed server option is preferred over the dynamic server option.) Load share and shed fraction are defined for this case as they are for Static Servers. However, the Dynamic Servers selection offers flexibility in exactly how they are specified. The Dynamic Server option supports specification of load share, shed fraction and the On line flag dynamically, allowing them to be defined by the subscriber via a short XML status file at a location of the subscriber's choice (as described above).
The subscriber specifies the location of this file to ATC, including any authentication requirements desired. ATC polls the file at predetermined fixed intervals, e.g., every 30 seconds, and updates the current ATC policy to reflect its contents. Subscribers can easily specify changes to load share, shed fraction and the On line flag by editing the file, either manually or by updating it using automated modules, without requiring updates to an entire ATC policy.
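A poller of the kind described might look like the following minimal sketch. It is illustrative only: the URL, the 30-second interval (used above as an example) and the element/attribute layout are assumptions.

```python
# Minimal sketch of the polling behavior described above: fetch the subscriber's
# status file at a fixed interval and fold its values into an in-memory policy.
# The URL, the 30-second interval and the element/attribute layout are
# illustrative assumptions.
import time
import urllib.request
import xml.etree.ElementTree as ET

STATUS_URL = "https://feedback.customer.com/status.xml"   # hypothetical location
POLL_INTERVAL = 30                                         # seconds, as in the example above
DEFAULTS = {"load_share": 1.0, "shed_fraction": 0.0}

def poll_forever(policy: dict) -> None:
    """Update `policy` in place from the status file, once every POLL_INTERVAL seconds."""
    while True:
        try:
            with urllib.request.urlopen(STATUS_URL, timeout=15) as resp:
                elem = ET.fromstring(resp.read())
            policy["online"] = elem.get("onLine", "False").lower() == "true"
            policy["load_share"] = float(elem.get("loadShare", DEFAULTS["load_share"]))
            policy["shed_fraction"] = float(elem.get("shedFraction", DEFAULTS["shed_fraction"]))
        except (OSError, ET.ParseError):
            # Retrieval or parsing failed: the "URL Down" parameters described
            # below govern how ATC treats the server in this case.
            pass
        time.sleep(POLL_INTERVAL)
```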
In order to maintain flexibility, values for the “URL Down load share” and the “URL Down shed fraction” parameters can be set in the ATC policy. They are used to control the behavior of ATC under the circumstance that the ATC polling mechanism is unable to retrieve the status file for a server, or is unable to parse its contents.
Case 1
Subscriber specifies all three ATCServerStatus attributes in the status file, with the “URL Down load share” parameter set to zero. In this case the subscriber's server will not be returned as an answer under any conditions until the polling mechanism retrieves a valid XML file.
Case 2
Subscriber specifies all three ATCServerStatus attributes in the status file, with the “URL Down load share” parameter set to one. In this case, ATC treats a failure to return valid XML according to the “Down load share” and “Down shed fraction” specified in the policy.
Case 3
Subscriber specifies only the on line ATCServerStatus attribute in the status file. In this case, ATC uses the “Default load share” and “Default shed fraction” specified in the policy if the “On line” flag in the ATCServerStatus file is set to true.
Below is a table detailing all of the possible cases. In the table, a dash represents ATC taking the dynamic server out of consideration. LS and SF should be replaced by actual numbers.
Dynamic Server Cases
Overflow Server
A user uses the Overflow Server option if the user would like to specify what should happen if, for some reason (e.g., server unavailability), no answers are selected by the Static or Managed (or Dynamic) Server selection process. Typically, the overflow server will be a CNAME to a service, such as Level 3 Communications' Content Delivery Network (CDN), that should be available in the case that the user's individual Static or Managed (or Dynamic) servers are unavailable.
Managed and Dynamic Server Status Alerts
The global ATC network preferably maintains an overall state of a user's Managed and Dynamic servers using Monitored or Load feedback URLs. A Server Status page, available from the subscriber web interface, continuously updates with the status (up or down) of the user's URLs and, if down, the relevant error condition (time-out, connection failure, file not found, etc.). Additionally, the overall server state is displayed (either on-line or off-line) along with a relevant status message. If a server changes state, an email message is sent to the user detailing the event. Email messages are preferably not sent more frequently than once every 15 minutes per domain name origin.
Once a DNS request is directed to an appropriate continent, the country based distribution policies 225 may further constrain the selection to particular servers that are in the same or close-by countries where the client 112 is located. For example, if the client 112 is located in the U.S., the country based policies 225 may direct the selection from the servers located in the U.S. Similarly, the region based distribution policies 230 may further constrain the selection to, for example, the west coast or east coast depending on where the client 112 is located.
When a particular server group is selected (e.g., after hierarchical decisions based on the continent-based policies 220 and the region-based policies 230), the load share policies 240 govern the process whereby servers in a given server group should be selected. Such policies may be determined based on the servers' capacities or may be adaptively revised based on the dynamic performance or load of the underlying servers. The load share policies 240 may specify the percentage (share) of the total requests that each server in a server group should handle. For example, if a server group comprises a total of three primary servers (server 1, server 2, server 3), a load share policy for this server group may specify the load share as (0.3, 0.5, 0.2), indicating that server 1 should take 30% of the total load, server 2 should take 50% of the load, and server 3 should take 20% of the total load.
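For example, a load share policy of (0.3, 0.5, 0.2) can be honored with a weighted random choice, as in the following minimal sketch; the server names are illustrative.

```python
# Minimal sketch: honoring a load share policy of (0.3, 0.5, 0.2) across three
# primary servers with a weighted random choice. Server names are illustrative.
import random

servers     = ["server1", "server2", "server3"]
load_shares = [0.3, 0.5, 0.2]

def pick_server() -> str:
    return random.choices(servers, weights=load_shares)[0]

# Over many requests, roughly 30%, 50% and 20% of the answers name each server.
counts = {s: 0 for s in servers}
for _ in range(10000):
    counts[pick_server()] += 1
print(counts)
```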
The tiered failover policies 250 govern the selection of a server when a particular default set of servers is no longer functioning or available. For example, primary servers in a server group may be considered as a default set of servers that provide service when operation is normal. Unavailability of such primary servers may be detected according to the response time of the server. For example, if a server is simply not responding, the server may be considered as not available. When all the primary servers are down, the tiered failover policies 250 govern where the traffic should be directed. For instance, the tiered failover policies 250 may specify to direct all traffic to the overflow servers at the next tier.
The subscriber policies 120 govern selection of one or more servers within the subscriber server network 104 (or in some other network of servers) so that their IP addresses may be returned in response to DNS requests from the client 112 (resolver 116). To determine or to select appropriate servers in the subscriber server network 104, the decisions may be hierarchical. For example, as shown in
As illustrated in
The ATC name server network 140 is responsible for responding to DNS requests, including processing DNS requests, applying the ATC policies 200 to select one or more servers from the subscriber server network (or a different designated network), and replying with the IP address(es) of the selected server(s). The ATC name server network 140 dynamically maintains the ATC policies 200, which are either received from the ATC administrative framework 142 or updated according to the dynamic operational status of the servers. The ATC name server network 140 provides domain name-IP address resolutions based on dynamically updated ATC policies 200. The ATC name server network 140 may also monitor the operational status of individual name servers within the network and supply logging and monitoring data to the ATC administrative framework 142.
The ATC administrative framework 142 may be designed to have fault-tolerance. For example, as depicted in
The ATC name server network 140 may comprise a plurality of name server agents 118-1, 118-2, . . . , 118-k, each of which may be designed to be responsible for the DNS requests of a particular geographical (or any other administrative or functional) region. For example, the name server agent 118-1 may be responsible for processing all the DNS requests from North America, the name server 118-2 may be responsible for Europe's DNS requests, and the name server 118-3 may be responsible for DNS requests from Japan. In addition, a name server agent may also serve as a back up name server agent for other name server agents in the network. For example, if the North America name server agent is not functioning properly, the Europe name server agent may be temporarily assigned to handle the DNS requests from North America. For that purpose, all ATC policies may be propagated to all of the name server agents in the ATC name server network 140.
The ATC policy database 124 stores the ATC policies 200. As discussed above, the ATC policies 200 may include policies from different sources (e.g., from subscribers and from other entities that control network traffic). Both the subscriber policies 120 and other policies 122 may be defined and provided to the AMA 144. In the exemplary configuration shown in
The ATC policy management mechanism 152 may broadcast or propagate the ATC policies 200 to relevant name servers in the ATC name server network 140 so that the ATC policies 200 may be used to control the domain name translation service. The stored ATC policies may be dynamically updated via different means. For example, the policies may be updated through the ATC administrative browser 166, or the administrative policy update mechanism 164 may revise existing ATC policies.
Alternatively, the AMA 144 may also be provided with policies from different sources through the ATC policy management mechanism 152, which may regularly poll dynamically updated policies from different locations. Such locations may include network locations designated to provide dynamic policy-related information or servers (either in the subscriber server network 104 or in the CDN 105) that are classified as managed servers. For instance, a server may dynamically specify its load share via a designated file stored on the server or at some other location (the file may reside on the server being monitored or at some other location, e.g., a central load controlling server). To retrieve such dynamically defined load share information from a managed server, the ATC policy management mechanism 152 may poll the designated file stored on the managed server to obtain relevant load share information. Dynamic policies may also be polled from other policy making entities.
Broadcasting ATC policies may take place periodically according to some pre-defined interval or may be triggered whenever the stored ATC policies are updated. The ATC policy management mechanism 152 may monitor changes made to the existing ATC policies. The ATC policy management mechanism 152 may poll the ATC policies stored in the ATC policy database and see whether there are changes. On the other hand, whenever the ATC policy management mechanism 152 polls dynamic policies from specified locations (such locations may be specified in existing ATC policies), it may determine whether the dynamically polled policies differ from existing ATC policies. In the event that updated policies are different from the existing ATC policies, the ATC policy management mechanism 152 may re-broadcast the updated ATC policies to the ATC name server network 140.
The ATC network monitoring mechanism 150 may collect DNS log summaries from different name servers in the ATC name server network 140. Such summary log data may be received in the form of events that provide information such as, for example, the number of requests directed to particular servers in a given time period. The ATC network monitoring mechanism 150 may collectively process such DNS log summaries (or events) from the entire ATC system. The report generation mechanism 168 may generate monitoring status reports from these summaries and make such reports available to the subscriber 102 via the secure web-based GUI 160.
The administrative master backup mechanism 170 may periodically update the AMA backups 146-1, . . . , 146-m to ensure that all the backup agents are current. This may include replicating the ATC policies and the operational status of various control mechanisms (including the ATC policy management mechanism 152, the policy editing mechanism 162, and the administrative policy update mechanism 164), and providing the up-to-date information to the AMA backups.
A major function of an administrative master agent is to manage the ATC policies, to make sure that updated ATC policies are supplied to the name server agents in the ATC name server network 140, to monitor the various name servers' performance, to generate dynamic monitoring status reports of system performance, and to maintain a connection through which policies may be updated dynamically and monitoring reports can be examined.
Information flagging system errors and other anomalous conditions is collected by the ATC network monitoring mechanism or agent 150.
The trap handler 176 further comprises an event receiver 178 that intercepts trap events from the name servers and an event consolidation mechanism 180 which may classify the trapped events and organize them in a reasonable and appropriate fashion. The processing mechanism 182 may process the consolidated events to identify useful or informative patterns, which may be further used by the alert generation mechanism 184 to identify problematic patterns which may significantly affect the system performance.
The Adaptive Traffic Control (ATC) framework according to the present invention may be deployed as a stand-alone service directing traffic solely to the subscriber's servers, in conjunction with another content delivery network (CDN) provider, or in conjunction with any other service.
Each domain name server in the ATC name server network 140 may include (
The database manager 132 maintains the policy database 124. It receives policies broadcast from the ATC policy management mechanism 152 in the ATC administrative framework 142 and populates the policies in the policy database 124. The ATC policies received from the ATC administrative framework 142 may also include information that defines or classifies servers in the subscriber server network 104 (or in the CDN 105). For example, some servers may be defined as monitored servers and some may correspond to managed servers.
When the database manager receives such information, it may inform the monitoring mechanism 130 of the classification of the underlying servers so that the monitoring mechanism 130 can monitor each server according to its status. When the ATC policy management mechanism 152 broadcasts updated policies, the database manager 132 accordingly updates the relevant policies stored in the policy database 124.
The monitoring mechanism 130 monitors the operational status of the name server 118-1 and one or more servers in the subscriber server network. It may collect events that occur in the name server 118-1 during operation and send such events to the ATC network monitoring mechanism 150 in the ATC administrative framework 142. On the other hand, it may also monitor the operations of various servers in the subscriber server group 104 (or in the CDN 105) according to how each server is defined (monitored or managed server).
If a server is defined as a monitored server, the monitoring mechanism 130 may dynamically probe the server (as discussed earlier) to determine its availability. If a server is defined as a managed server, the monitoring mechanism 130 may monitor its availability during operation. The monitoring mechanism 130 may also poll dynamic load share information from the server. When the monitoring mechanism 130 detects that a server is no longer available, it may inform the database manager 132 to create a local policy indicating that the server is no longer available so that the ATC mechanism 126 can take this into account when resolving a hostname.
When the monitoring mechanism 130 polls the dynamic load share information from the server, it may inform the database manager 132 to update the load share policies in the policy database 124 that are affected by the dynamics of the polled load share. For example, if three primary servers in a server group originally have load share (0.3, 0.3, 0.4) and the third primary server now changes its load share to 0.2, the database manager 132 may accordingly update the load share among these three primary servers into (0.4, 0.4, 0.2).
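The update in the example above can be reproduced with a small helper. This is a sketch only: the rule that the freed share is split equally among the remaining primary servers is an assumption consistent with the (0.3, 0.3, 0.4) to (0.4, 0.4, 0.2) example, not a rule stated elsewhere in this description.

```python
# Sketch reproducing the load share update in the example above. The rule that a
# managed server's freed share is split equally among the remaining primary
# servers is an assumption consistent with the (0.3, 0.3, 0.4) -> (0.4, 0.4, 0.2)
# example, not a rule stated elsewhere in this description.
def update_load_share(shares, index, new_share):
    freed = shares[index] - new_share
    updated = list(shares)
    updated[index] = new_share
    others = [i for i in range(len(shares)) if i != index]
    for i in others:
        updated[i] += freed / len(others)
    return updated

print(update_load_share([0.3, 0.3, 0.4], 2, 0.2))   # approximately [0.4, 0.4, 0.2]
```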
The report generator 134 generates reports related to the operations of the underlying name server based on recorded log information 136. Such generated reports may be sent to a report consolidator 154 in the ATC administrative framework 142 so that reports from different name servers may be consolidated.
A subscriber may activate (turn up) the ATC system (DNS servers) in one of two ways: using a DNS CNAME or using NS delegation. Instead of using ATC to direct traffic for a single DNS hostname, the subscriber may have many different subdomains that it would like to direct to ATC. For example, the subscriber might want all downloads from dl.customer.com handled by ATC, together with all of its subdomains, but have all other domains that it controls, such as www.customer.com, resolved by its own name server:
In this case, instead of adding the CNAME record in the subscriber's DNS zone file, it simply delegates the dl.customer.com. name to ATC via NS records in its zone file.
The delegation (using NS-records) method is presently preferred as it is more flexible and offers all of the reliability, scalability and flexibility of ATC. After the initial contact for delegation, a properly operating resolver making occasional queries will not contact the subscriber's own name server or name servers again. The result provides much better name resolution performance since the unnecessary overhead of routing each fresh DNS request through the subscriber's name server is eliminated. The CNAME method keeps the subscriber's own name server in the loop. That is, whenever the CNAME TTL expires, client resolvers will return to the subscriber's name servers to refresh the record.
For both CNAME and NS delegation methods, the procedure to turn off the ATC switch is the same—the subscriber edits its DNS zone files to remove the delegation authority to ATC. DNS requests will continue to be served in accordance with the subscriber's defined ATC policies until the TTLs have expired on the appropriate delegations.
The System in Operation
a) is an exemplary flowchart of a process, in which a name server resolves a DNS request based on ATC policies, according to an embodiment of the present invention. First, a user enters a URL into the user's browser 114 (or into any application that accepts URLs as input and obtains the corresponding resource for the client) (at 802). The client's resolver 116 attempts to resolve the hostname of the URL in order to obtain an IP address of a server from which the resource identified by the URL can be obtained (at 804). The resolver 116 will be directed by the client's DNS (not shown) to a DNS Name Server 118 in the ATC name server network 134 (at 806). The resolver 116 provides the name server 118 with the hostname it is attempting to resolve.
The name server 118 receives the request to resolve the hostname (at 808) and determines one or more servers in the subscriber server network 104 or in the CDN 105 that can process the client's request according to the location of the resolver 116 as well as relevant ATC policies retrieved from the policy database 124 (at 810). Details of this operation are described with reference to
b) is a flowchart of an exemplary process, in which a domain name server selects one or more servers according to the location of the client and relevant ATC policies. The location of the resolver 116 (or client) is first determined (at 816). Relevant ATC policies are then retrieved (at 818) from the policy database 124. One or more servers in either the subscriber server network 104 or the CDN 105 are selected according to the determined location of the client and the relevant ATC policies (at 820).
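A minimal Python sketch of this selection step is given below. It assumes a simplified policy structure keyed by geographic zone, with tiered servers, per-server load shares, shed fractions and on-line flags; the structure and names are illustrative only, and shedding is modeled crudely here by down-weighting a server rather than by explicitly redirecting the shed fraction to a lower tier.

import random

def select_servers(client_zone, policies):
    """Pick candidate server IPs for a resolver: use the zone-specific
    policy if one exists, otherwise the catch-all "Other" policy, and
    walk the tiers in order until an available server is found."""
    policy = policies.get(client_zone, policies["Other"])
    for tier in policy["tiers"]:
        available = [s for s in tier if s["online"]]
        weights = [s["load_share"] * (1.0 - s["shed_fraction"]) for s in available]
        if available and sum(weights) > 0:
            return [random.choices(available, weights=weights, k=1)[0]["ip"]]
    # If no server is preferred, return all of them in the hope that the
    # end user can connect to one of them.
    return [s["ip"] for tier in policy["tiers"] for s in tier]

# Placeholder policy using a documentation address (192.0.2.1).
policies = {"Other": {"tiers": [[{"ip": "192.0.2.1", "online": True,
                                  "load_share": 1.0, "shed_fraction": 0.0}]]}}
print(select_servers("Europe", policies))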
c) is a flowchart of an exemplary process, in which the monitoring mechanism 130 in a domain name server monitors the operations of the name server as well as one or more servers in the subscriber server network 104 or the CDN 105. Events occurring during domain name service are monitored (at 822). Such events are sent to the ATC network monitoring mechanism 152 (at 824). In addition, the availability of the one or more servers is also monitored (at 826). Furthermore, if any of the one or more servers is defined as a managed server (determined at 828), dynamic load share information is polled (at 830). Both the availability information and the dynamic load share information (if any) are used to update ATC policies that are local to the name server (at 832).
d) is a flowchart of an exemplary process, in which the ATC policy management mechanism 152 dynamically maintains and broadcasts the ATC policies. Initially, policies from different sources are received (at 834) and broadcast to the name servers 118 (at 836). If backup is necessary (determined at 838), the ATC policy management mechanism 152 sends current policies to the master backup agents 146 (at 840).
The ATC policy management mechanism 152 also performs dynamic policy maintenance. It polls dynamic policy information (at 842) at certain defined intervals and uses such polled dynamic policy information to update existing policy (at 844). The updated policies are then broadcast to the name servers (at 846). If the updated policies need to be propagated to the backup agents (determined at 848), they are sent to the master backup agents (at 850).
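A highly simplified Python sketch of this maintenance loop follows; the helper functions (poll_dynamic_policies, broadcast_to_name_servers, send_to_master_backup_agents) are placeholders standing in for the mechanisms described above, not actual interfaces.

import time

def poll_dynamic_policies():
    # Placeholder: fetch subscriber status reports / dynamic policy data.
    return {}

def broadcast_to_name_servers(policies):
    # Placeholder: push the current policies to every name server agent.
    pass

def send_to_master_backup_agents(policies):
    # Placeholder: forward the current policies to the master backup agents.
    pass

def policy_maintenance_loop(policies, poll_interval=30, backup_needed=False):
    """Poll dynamic policy information at a fixed interval, merge it into
    the existing policies, broadcast the result and, if required, forward
    it to the master backup agents (illustrative only)."""
    while True:
        policies.update(poll_dynamic_policies())
        broadcast_to_name_servers(policies)
        if backup_needed:
            send_to_master_backup_agents(policies)
        time.sleep(poll_interval)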
e) is a flowchart of an exemplary process, in which the ATC network monitoring mechanism 150 monitors operations of the name servers and sends alerts to the NOC 148 and the subscriber 102. Events sent from the monitoring mechanisms of different name servers are trapped (at 852). Such trapped events from different sources are then consolidated (at 854) and processed (at 856). If there is an alarming situation (determined at 858), the ATC network monitoring mechanism 150 generates an alert (at 860). The generated alert is then sent to both the NOC 148 and the subscriber 102 (at 862).
Policy Administration
The ATC policies may be initially set up and later adaptively updated according to the servers' dynamic operational status. The ATC policies may be formed via various means, and the approaches described here may also be applied to form other policies. A GUI approach or a file-based approach (or both) may be employed to set up subscriber policies. Through the GUI approach, different means of forming subscriber policies may be adopted. For example, a browser may be used so that a subscriber can enter policies directly. An XML file containing descriptions of the subscriber policies may also be loaded using the GUI approach so that the policies may be parsed and stored. As another alternative, a file containing descriptions of the subscriber policies may be loaded in a similar fashion and parsed accordingly. When a file-based method is used, a file containing descriptions of the subscriber policies, whether structured as XML or otherwise, may be accessed via other means such as FTP.
In some preferred embodiments, a subscriber accesses a secure web-based GUI 160 (
a)-9(c) show exemplary secure web-based graphical interfaces, through which a subscriber may define load share policies and overflow policies with respect to specified network resources, according to embodiments of the present invention. There may be different types of load sharing servers: (1) static servers, (2) dynamic servers, and (3) overflow servers. A dynamic server is one that is specified as either a monitored or a managed server, whose dynamic availability may be monitored and whose load may be re-directed when it becomes unavailable. In the case of a monitored server, both its load share and load shed fraction may also be made dynamic.
The policies governing the routing of requests to these different types of servers may be defined accordingly. First, the policies that govern static servers are applied when servers are relatively static and do not change often over time. This may mean that the availability of the servers is fairly stable and that load sharing among different servers is also relatively stable. The policies that control dynamic servers are applied when servers are expected to change frequently. Such policies include failover policies, shedding policies, and tiered failover policies (described earlier). The overflow policies control the change of flow of requests when, for some reason, primary servers, either static or dynamic, become unavailable or overloaded. In this case, relevant overflow policies determine to which overflow server a request for name service should be directed.
The load share with respect to a given server represents the amount of traffic that will be sent to that server within a static server setting. An exemplary method to compute the fraction of traffic sent to a server is to sum the load share values of all servers in the current static server setting and divide that server's load share by the total.
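Stated as a short illustrative Python sketch (the function name and the example values are placeholders only):

def traffic_fraction(load_shares, server):
    # Fraction of traffic sent to `server` in a static server setting:
    # that server's load share divided by the sum of all load shares.
    return load_shares[server] / sum(load_shares.values())

print(traffic_fraction({"new_york": 1.0, "san_francisco": 2.0}, "san_francisco"))  # 2/3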
The shed fraction for a given server represents the percentage of traffic that should be redirected away from the server. The redirection may be carried out after the load share allocation is made based upon the load share policies.
Shedding fractions may also be computed automatically on the fly. For example, when a server is detected to have a slow response, indicating that it may be overloaded, a shedding fraction may be computed according to the discrepancy between its expected response time and its actual response time. Such an automatically computed shedding fraction can then be applied to re-direct that fraction of the traffic to a different server, unloading some of the traffic originally intended for the overloaded server.
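The exact formula is not prescribed here; as one hedged illustration only, the shed fraction might be taken as the relative overshoot of the observed response time over the expected response time, clamped to a configured maximum. All parameter names and the clamping value in the sketch below are assumptions.

def auto_shed_fraction(expected_ms, actual_ms, max_shed=0.9):
    """Illustrative only: shed in proportion to how far the observed
    response time exceeds the expected response time."""
    if actual_ms <= expected_ms:
        return 0.0
    return min((actual_ms - expected_ms) / actual_ms, max_shed)

print(auto_shed_fraction(expected_ms=100, actual_ms=250))  # 0.6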
For each resource server, an on-line flag may also be set to indicate whether the server is currently available. This flag is shown in
Since managed and dynamic servers are expected to change relatively frequently, policies that govern their selection may be defined in two stages. The policies for managed and dynamic servers may be initially specified in a similar fashion as for static servers. For instance, load share and shed fraction can be defined for dynamic servers New York and London, as shown in
The on-line status report may be provided at one or more network locations specified by the subscriber. The locations may be specified as a Uniform Resource Locator (URL) and may be accessed through an HTTP request. A location of the status report may be specified in the initial policy, providing a constant link to the location.
The frequency with which the status report is accessed may also be specified explicitly. For example, it may require the relevant mechanisms (specifically, the ATC policy management mechanism or the monitoring mechanism of each name server) to poll the status report at regular intervals, e.g., every thirty seconds.
A status report may be constructed using a standard language such as the Extensible Markup Language (XML). Such a status report may contain revised policies, which may be determined manually by the subscriber based on network performance, devised by an automated process based on network performance, or generated by an individual managed server. In such cases, the status report includes updated policies, and when it is polled, the updated policies are used in future traffic control. For example, a managed server may re-define its load share according to its dynamic capacity. When such dynamically defined load share information is accessed and used in enforcement, the traffic is controlled in a manner that is adaptive to the network dynamics.
When a status report is accessed, the relevant mechanisms (e.g., the ATC policy management mechanism in the ATC administrative framework or the ATC mechanism in the corresponding name server) update the associated policies based on the information contained in the status report. For example, a status report may indicate that the current load share for server New York should be 0.4 (down from 1.0 previously) with the same shed fraction (0.2), and that the current load share for server London should be 0.5 (down from the previous 1.0) with a shed fraction of 0.4 (40%). An alternative scenario may be that the status report provides network performance data about each dynamic server and the ATC mechanism, after accessing the status report, determines how the current policies may be revised accordingly.
When the status report provides dynamic policies, it may, in general, include, for each dynamic server, the following information:
<ServerStatus loadShare="LS" shedFraction="SF" online="Boolean"/>
where LS and SF represent numerical values and “Boolean” represents a logical value of either “true” or “false”.
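A brief sketch of how a polling mechanism might fetch and parse such a status report is given below. The report URL is hypothetical, the thirty-second interval follows the example above, and only the ServerStatus element shown above is assumed.

import time
import urllib.request
import xml.etree.ElementTree as ET

def poll_server_status(url):
    """Fetch a status report and return (load_share, shed_fraction, online)
    parsed from its <ServerStatus .../> element."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        root = ET.fromstring(resp.read())
    status = root if root.tag == "ServerStatus" else root.find(".//ServerStatus")
    return (float(status.get("loadShare")),
            float(status.get("shedFraction")),
            status.get("online") == "true")

# Hypothetical usage, polling every thirty seconds:
# while True:
#     print(poll_server_status("http://status.example.com/newyork.xml"))
#     time.sleep(30)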
With respect to dynamic servers, the mechanism of dynamic policies described above allows a subscriber or a server to easily specify changes to existing policies (e.g., changes to load share, shed fraction policies and the On-line flag) without having to update an entire ATC policy hierarchy. In addition, a parameter (called “Down on line” in some embodiments) can be set in an ATC policy to control the behavior of the ATC mechanism when the ATC policy management mechanism is unable to properly access the status report, for example because polling or retrieval fails or because the status report's content cannot be parsed. According to some embodiments of the present invention, three different exemplary approaches may be applied to handle this situation.
With the first approach, a subscriber may instruct an ATC mechanism in a name server not to select a particular server if the status report for that server cannot be properly obtained. The subscriber may specify this policy by setting the parameter “Down on line” to false. In this case, the ATC mechanism will not consider the underlying server as a candidate for the selection process until the polling mechanism retrieves a valid status report.
The second approach to dealing with a polling failure is to allow the ATC mechanism to select the underlying dynamic server according to its default policies (or original policies). To specify this solution, the subscriber may set the “Down on line” parameter to true.
The third approach to handling the situation where the status report cannot be properly retrieved is to allow the ATC mechanism to select the underlying server if its “on line” flag is on (i.e., the server is available). To achieve this, a subscriber may specify only the “on line” ServerStatus in the status report, with the parameter “Down on line” set to either true or false. In this case, the ATC mechanism will use the load share and shed fraction specified in the policy when the “On line” flag is set to true. The load share and the load shed fraction may both be dynamically determined (either broadcast from the ATC policy management mechanism or polled by the ATC mechanism 130 from the underlying server).
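The three approaches can be summarized in a small Python sketch of the decision the ATC mechanism might make when a poll fails; the parameter and function names mirror the description above but are illustrative only.

def selectable_after_poll_failure(down_on_line, on_line_flag=None):
    """Return whether a dynamic server stays in the selection pool when its
    status report cannot be retrieved or parsed (illustrative only).

    First approach:  "Down on line" is false, so the server is excluded
                     until a valid status report is retrieved again.
    Second approach: "Down on line" is true, so the server remains
                     selectable under its default (original) policies.
    Third approach:  the status report carries only the "on line" flag;
                     the server remains selectable while that flag is
                     true, using the load share and shed fraction from
                     the policy."""
    if on_line_flag is not None:
        return bool(on_line_flag)
    return bool(down_on_line)

print(selectable_after_poll_failure(down_on_line=False))        # first approach
print(selectable_after_poll_failure(down_on_line=True))         # second approach
print(selectable_after_poll_failure(True, on_line_flag=True))   # third approach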
Static, dynamic and managed servers are primary servers, although their selection may be controlled by operationally different policies. The third type of server is the overflow server. Overflow servers provide alternatives when primary servers, for some reason, cannot be selected. Typically, an overflow server corresponds to a canonical name (CNAME) pointing to a service provided by a CDN (such as, e.g., the CDN service offered by Level 3 Communications). The choice of overflow servers may be determined based on the belief that they are in general always available. An overflow server may be defined through a window illustrated in
a)-10(b) show example subscriber policies, defined based on a set of resource servers, that govern the selection of the servers according to different criteria such as geographical location of an incoming request and the time zone of each of the locations. In
b) shows an exemplary decision tree embedded in a set of geographical policies that guide how the traffic should be directed to a set of servers. In the decision tree depicted in
c)-10(d) show another example decision tree constructed based on a set of resources (servers or CDNs) and subscriber policies that are defined based on geographical locations of an incoming request with overflow policies that allow the ATC mechanism to direct traffic to pre-defined overflow servers when primary servers are not available. In the Resource window shown in
The decision tree in
The next four examples (Examples 3-6) are based on an example subscriber which has ATC handling traffic for www.subscriber.com with a CNAME hand-off to www.subscriber.com.nsatc.net. The example subscriber has three datacenters, one each in San Francisco, New York, and London. The set of resources used by these four examples is shown in
With this policy (
Changing the shed fraction for any one server is not appropriate in this example because there is no defined lower tier or overflow server to receive shed traffic. Both servers are in the same tier.
A variation of this example is to put the New York server in Tier 2, i.e., to have New York function as a backup. In this configuration, all traffic would be directed to the San Francisco server unless it was monitored as down or shedding, in which case traffic would then go to New York. That is, if the San Francisco server had a shed fraction of, e.g., 10%, then ATC would return the New York server 10% of the time. When there is only one server in a tier, there is no load share balancing for ATC to do, and the load share value becomes unimportant (unless it is zero).
This example demonstrates load share variation and server failover handling within a load sharing server set consisting only of managed servers.
This policy (
The shed fractions can be modified in one of two ways:
The subscriber may maintain an even load across all subscriber servers by setting the load share. This value typically does not need continuous adjustment. Then, if traffic levels increase and all servers are nearing their capacity, all servers begin shedding traffic to the Overflow server(s) (in this example, “dl.subscriber.com.c.footprint.net”, a CName for a CDN).
This example policy (
In this case, the subscriber does not have an overflow server defined. If the load on any server becomes too high, the shed fraction can be increased on that server and traffic will be sent to the lower tiers defined within each geographic region.
If both the San Francisco and New York servers are offline, all traffic is directed to the London server until the U.S. servers return. If all three servers are offline, ATC is unable to determine which servers are preferred and thus selects all three servers, with the hope that the end user may be able to connect to one of them.
This policy (
Note: The Other label is used, as before, to catch all non-specified world zones, and again within the U.S. time zone split to catch all remaining regions (in this case the western time zones of the U.S. and the rest of the Americas).
The following three examples (Examples 7-9) are based on an example subscriber which has ATC handling traffic for an origin server demo.nsatc.net and for the domain nsatc.net. The subscriber has three data centers in New York, San Francisco, and London, UK, with managed servers at each center. As shown in
k) shows a policy for the two sub-domains “images.nsatc.net” and “www.nsatc.net”. For the “images” sub-domain, end users are directed to servers in New York or San Francisco, in the ratio 1:2. That is, the ATC mechanism will return the IP address of a San Francisco server twice as often as that of a NY server. If the ATC mechanism detects that, e.g., the San Francisco server is down, it will stop returning the IP address for the SF server and will only return the IP address for the NY server (until monitoring indicates that the SF server has returned). The subscriber may adjust the load share of the servers as in the earlier examples (either by logging in to the ATC to change the load share values manually, or by configuring ATC to poll a particular location for a configuration file). In case of overflow, the requests go to the CDN (with the CName “demo.c.footprint.net”).
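As a small illustration of the 1:2 ratio (with a down server dropped from the pool until monitoring shows it has returned), a weighted choice could be sketched as follows; the server entries and addresses are placeholders, not actual subscriber data.

import random

IMAGES_SERVERS = [
    {"name": "new_york",      "ip": "192.0.2.10", "load_share": 1, "online": True},
    {"name": "san_francisco", "ip": "192.0.2.20", "load_share": 2, "online": True},
]

def pick_images_server(servers=IMAGES_SERVERS):
    # San Francisco is returned twice as often as New York; a server that
    # monitoring reports as down is skipped until it returns.
    up = [s for s in servers if s["online"]]
    weights = [s["load_share"] for s in up]
    return random.choices(up, weights=weights, k=1)[0]["ip"]

print(pick_images_server())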
The “www” sub-domain is handled somewhat differently in that the overflow for “www” will go to either of two CDNs. In this example, as shown in
As with the load share values for the other servers, the load share for the CDNs can also be adjusted by the subscriber on the fly or by logging in to the ATC.
Those skilled in the art will realize and understand, upon reading this description, that a subscriber may wish to adjust load share values for a CDN for performance and/or business reasons. E.g., a subscriber may have to pay a higher rate for one CDN than for another.
Those skilled in the art will realize and understand, upon reading this description, that the two CDNs may be owned and operated by different entities and may be in different domains.
Using this example, a subscriber can have its traffic handled initially by its origin servers (in NY or SF), with overflow being handled by a CDN. Overflow for the “images” sub-domain is handled by a single CDN, whereas overflow for the “www” sub-domain is handled by two CDNs.
m)-10(n) show a policy in which traffic to the sub-domain “www” is split equally between three CDNs. Each CDN has a load share of 1 and so an end-user will be returned a CNAME of one of the three CDNs, with each CDN having one third of the load. Again, the subscriber can modify the load share on the fly or by logging in to the ATC.
o) shows an exemplary policy with different rules for the sub-domains “ftp”, “download”, “www” and “images.”
End-user requests for the “download” sub-domain (download.nsatc.net) are handled by the subscriber's origin servers in NY and SF (in the ratio 1:2 (per
The “ftp” sub-domain is split based on country (i.e., on the country in which the end-user's request is believed to originate). End-user requests from the US are handled by the domain “us.demo.com” (a CNAME), and all other requests are handled by “international.demo.com” (also a CNAME). In this manner, ftp requests can be directed to different networks based on their country of origin.
The “images” sub-domain is split initially on three geographic zones:
I. Europe
II. Asia and Australia
III. U.S. and other
Within each zone, end-user requests are further directed based on server load. E.g., in zone I (for Europe), end-user requests are handled by the tier 1 server(s) in the UK. If these fail (or are overloaded and need to shed load), requests then go to the tier 2 server(s) in NY. And if these fail (or are overloaded and must shed), requests are directed to the tier 3 server(s) in SF. Thus, in response to an end-user request for the sub-domain “images”, the request originating in Europe, the ATC will return the IP address of a server in the U.K. But if that server in the UK is overloaded, the ATC will return the IP address of a server in N.Y. And if that server is also overloaded, the ATC will return the IP address of a server in S.F.
The Asia/Australia zone operates similarly, except that the servers are put into different tiers, favoring the order SF, NY, UK. All other zones favor the order NY, SF, UK.
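Expressed in the illustrative policy structure used in the earlier selection sketch, the geographic split for the “images” sub-domain might look as follows; the field names and IP addresses are placeholders (documentation addresses), not actual servers.

def tier(ip):
    # One single-server tier; equal load share and no shedding assumed.
    return [{"ip": ip, "online": True, "load_share": 1.0, "shed_fraction": 0.0}]

UK, NY, SF = "192.0.2.1", "192.0.2.2", "192.0.2.3"   # placeholder addresses

IMAGES_POLICY = {
    "Europe":         {"tiers": [tier(UK), tier(NY), tier(SF)]},  # zone I
    "Asia/Australia": {"tiers": [tier(SF), tier(NY), tier(UK)]},  # zone II
    "Other":          {"tiers": [tier(NY), tier(SF), tier(UK)]},  # zone III
}
# A structure of this kind could be fed to a selection routine such as the
# select_servers sketch given earlier.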
The “www” sub-domain is not split based on geographic location of the end-user. Traffic is handled by SF first, with NY as an overflow.
In addition to the various web-based interfaces described herein, there are also other means through which subscriber policies may be defined. As discussed earlier, for example, subscriber policies may also be constructed or specified in an XML file which can be downloaded by an ATC mechanism and used to control the traffic.
Policies defined via different means (e.g., through a web-based GUI or an XML file) may be converted into some pre-defined format within the ATC mechanism. Such a pre-defined format may be designed for efficiency in managing and handling the ATC policies. For example, the internal format for ATC policies may be designed so that, internally, the AMAs can conveniently store, access, and broadcast the ATC policies to the name server agents, and the name server agents can efficiently apply the policies.
As mentioned earlier, in addition to managing the ATC policies, the AMA may also monitor the performance of the name servers and generate viewable DNS log reports. The monitoring mechanism may gather performance information from either the DNS logs of the name servers or the events trapped from the name servers. Such gathered information may be used by the report generation mechanism to construct informative reports. The report generation mechanism may also make such reports available to the subscribers via the secure web-based GUI.
b) shows a plot of log data related to a specified server. The plot is generated from the log entries of a particular server over a specified period of time (X-axis) at a certain resolution. The exemplary plot reflects the volume of the traffic directed to the underlying server during the period between 2:30 pm and 3:30 pm on Sep. 24, 2002. The traffic volume is expressed as the number of replies generated by the underlying server per minute (Y-axis). It can be seen in this example plot that there is a surge in traffic volume between 2:36 pm and 2:38 pm.
In some implementations, end-user resolvers may use a preferred subset of the ATC nameservers. This may be implemented by providing an initial pool of ATC name servers which respond to new requests from new end-user resolvers. When an end-user resolver first requires name resolution from an ATC nameserver, that resolver is directed (e.g., by the DNS system) to one of the pool of ATC nameservers. When that ATC nameserver provides an answer to the end-user resolver, it also gives that resolver a list of preferred ATC nameservers for that resolver, i.e., one or more ATC nameservers that are better suited to handle name resolution for that end-user resolver. An ATC nameserver may be better suited to handle requests from an end-user resolver for a number of reasons, e.g., because it is electronically closer to the resolver, because it is geographically appropriate for that resolver, etc. Those skilled in the art will realize, upon reading this description, that different and/or other measures of suitability may be used to provide an end-user resolver with a list of preferred ATC nameservers. Once an end-user resolver has a list of preferred ATC nameservers, that resolver will make future requests of nameservers in its list (until the entries for those nameservers expire or are replaced).
The various mechanisms described herein, including, without limitation, the adaptive traffic control (ATC) mechanism, the location determination mechanism, policy editing mechanism, administrative policy update mechanism, ATC policy management mechanism, report generation mechanism, a monitoring mechanism, and an administrative master backup mechanism may be implemented in hardware, software or a combination thereof. When implemented in software, they may be implemented in any type of appropriate interpreted or compiled programming language. When implemented fully or partially in software, aspects of the invention can reside on any memory or storage medium, including but not limited to a ROM, a disk, an ASIC, a PROM and the like. While the invention has been described with reference to particular mechanisms (algorithms, processes and functions) and architectures, one skilled in the art would realize that other mechanisms and/or architectures could be used while still achieving the invention.
When the various mechanisms of the present invention are running on a particular machine (e.g., at a client or on a server), they may reside in the memory of the machine or on a storage device or in a combination. Further, while many of the operations have been shown as being performed in a particular order, one skilled in the art would realize that other orders, including some parallelization of operations, are possible and are considered to be within the scope of the invention.
The present invention has been described above in connection with a preferred embodiment thereof; however, this has been done for purposes of illustration only, and the invention is not so limited. Indeed, variations of the invention will be readily apparent to those skilled in the art. Such variations also fall within the scope of the invention. Thus, while the invention has been described with reference to certain illustrated embodiments, the words that have been used herein are words of description rather than words of limitation. Changes may be made, within the purview of the appended claims, without departing from the scope and spirit of the invention in its aspects. Although the invention has been described herein with reference to particular structures, acts, and materials, the invention is not to be limited to the particulars disclosed, but rather extends to all equivalent structures, acts, and materials, such as are within the scope of the appended claims.
The processing described may be performed by a properly programmed general-purpose computer alone or in connection with a special purpose computer. Such processing may be performed by a single platform or by a distributed processing platform. In addition, such processing and functionality can be implemented in the form of special purpose hardware or in the form of software being run by a general-purpose computer. Any data handled in such processing or created as a result of such processing can be stored in any memory as is conventional in the art. By way of example, such data may be stored in a temporary memory, such as in the RAM of a given computer system or subsystem. In addition, or in the alternative, such data may be stored in longer-term storage devices, for example, magnetic disks, rewritable optical disks, and so on. For purposes of the disclosure herein, a computer-readable media may comprise any form of data storage mechanism, including such existing memory technologies as well as hardware or circuit representations of such structures and of such data.
This application is a continuation of U.S. application Ser. No. 11/976,648, filed Oct. 26, 2007, titled, “Policy-Based Content Delivery Network Selection,” (which will issue on Dec. 28, 2010 as U.S. Pat. No. 7,860,964) and which is a continuation-in-part (CIP) of U.S. application Ser. No. 10/259,497, titled “Configurable adaptive global traffic control and management,” filed Sep. 30, 2002, now U.S. Pat. No. 7,822,871, the entire contents of which are hereby incorporated herein by reference for all purposes. Application Ser. No. 10/259,497 is related to and claims priority from provisional U.S. Patent Application No. 60/325,177, titled “Configurable Adaptive Global Traffic Control and Management,” filed Sep. 28, 2001, the entire contents of which are incorporated herein by reference for all purposes. This application is also related to and claims priority from U.S. application Ser. No. 11/932,162, filed Oct. 31, 2007, the entire contents of which are incorporated herein by reference for all purposes.
SNMPV2 Working Group J Case SNMP Research et al: “Management Information Base for Version 2 of the Simple Network Management Protocol (SNMPv2); rfc1907.txt” IETF Standard, Internet Engineering Task Force, IETF, CH, Jan. 1996. |
Tarjan, Robert E., et al., “Storing a Sparse Table”, Commun.ACM, 22, 11, (Nov. 1979), 606-611. |
Thaler, D. G. and Ravishankar, C. V. 1998. Using name-based mappings to increase hit rates. IEEE/ACM Trans. Netw. 6, 1 (Feb. 1998), 1-14. |
U.S. P.T.O. Non-Final Office Action in U.S. Appl. No. 11/980,689, May 20, 2009. |
Vitter, J. S., et al., “Optimal Prefetching via Data Compression,” Proc. 32nd Annual IEEE Symposium on Foundations of Computer Science (Oct. 1991). |
Vixie, Paul; “Name Server Operations Guide for BIND,” Internet Software Consortium; La Honda, CA; p. SMM:10-2—SMM:10-30 (undated, 1996). |
Wessels, Intelligent Caching for World-Wide Web Objects, Masters Thesis, University of Colorado, Jan. 1995 (also presented at INET '95 in Jun. 1995). |
Yao, A. C. 1981. Should Tables Be Sorted?. J. ACM 28, 3 (Jul. 1981), 615-628. |
“Extended European Search Report dated Dec. 6, 2011”, App. No. 10178695.2, 7 pgs. |
Number | Date | Country
---|---|---
20110145386 A1 | Jun 2011 | US
Number | Date | Country
---|---|---
60325177 | Sep 2001 | US
Relation | Number | Date | Country
---|---|---|---
Parent | 11976648 | Oct 2007 | US
Child | 12978537 | | US
Parent | 10259497 | | US
Child | 11932162 | | US
Relation | Number | Date | Country
---|---|---|---
Parent | 10259497 | Sep 2002 | US
Child | 11976648 | | US
Parent | 11932162 | Oct 2007 | US
Child | 12978537 | | US