1. Field of the Invention
The present disclosure relates generally to load balancing among servers. More particularly, but not exclusively, the present disclosure relates to achieving load balancing by providing, in response to a DNS query from a client, the address of a server that is expected to serve the client with high performance in a given application.
2. Description of the Related Art
Under the TCP/IP protocol, when a client provides a symbolic name (“URL”) to request access to an application program or another type of resource, the host name portion of the URL needs to be resolved into an IP address of a server for that application program or resource. For example, the URL (e.g., http://www.foundrynet.com/index.htm) includes a host name portion www.foundrynet.com that needs to be resolved into an IP address. The host name portion is first provided by the client to a local name resolver, which then queries a local DNS server to obtain a corresponding IP address. If a corresponding IP address is not locally cached at the time of the query, or if the “time-to-live” (TTL) of a corresponding IP address cached locally has expired, the DNS server then acts as a resolver and dispatches a recursive query to another DNS server. This process is repeated until an authoritative DNS server for the domain (e.g., foundrynet.com, in this example) is reached. The authoritative DNS server returns one or more IP addresses, each corresponding to an address at which a server hosting the application (“host server”) under the host name can be reached. These IP addresses are propagated back via the local DNS server to the original resolver. The application at the client then uses one of the IP addresses to establish a TCP connection with the corresponding host server. Each DNS server caches the list of IP addresses received from the authoritative DNS for responding to future queries regarding the same host name, until the TTL of the IP addresses expires.
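For illustration only, the following minimal Python sketch shows the client side of this resolution step. It relies on the operating system's resolver, which in turn uses the local DNS server and the recursive resolution described above; the host name is taken from the example URL, and the function name is merely illustrative.

import socket

# Resolve a host name to the IPv4 addresses returned by DNS. Caching and TTL
# handling are provided by the operating system and the local DNS infrastructure.
def resolve_host(host_name):
    results = socket.getaddrinfo(host_name, 80, socket.AF_INET, socket.SOCK_STREAM)
    # Each result is (family, type, proto, canonname, sockaddr); sockaddr[0] is the IP address.
    return sorted({sockaddr[0] for (_f, _t, _p, _c, sockaddr) in results})

# The client application typically uses one of the returned addresses to open a TCP connection.
print(resolve_host("www.foundrynet.com"))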
To provide some load sharing among the host servers, many authoritative DNS servers use a simple round-robin algorithm to rotate the IP addresses in a list of responsive IP addresses, so as to distribute equally the requests for access among the host servers.
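A minimal Python sketch of such a rotation follows; the addresses are hypothetical, and an actual authoritative DNS server performs this rotation internally rather than as application code.

# Round-robin rotation of a DNS answer list: each successive response starts one
# position later, so client requests spread roughly evenly across the host servers
# (assuming each client uses the first address in the list it receives).
def round_robin_responses(addresses, num_queries):
    for i in range(num_queries):
        start = i % len(addresses)
        yield addresses[start:] + addresses[:start]

for answer in round_robin_responses(["10.0.0.1", "10.0.0.2", "10.0.0.3"], 4):
    print(answer)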
The conventional method described above for resolving a host name to its IP addresses has several shortcomings. First, the authoritative DNS server does not detect a server that is down. Consequently, the authoritative DNS server continues to return a disabled host server's IP address until an external agent updates the authoritative DNS server's resource records. Second, when providing its list of IP addresses, the authoritative DNS server does not take into consideration the host servers' locations relative to the client. The geographical distance between the server and a client is a factor affecting the response time for the client's access to the host server. For example, traffic conditions being equal, a client from Japan could receive better response time from a host server in Japan than from a host server in New York. Further, the conventional DNS algorithm allows invalid IP addresses (e.g., one corresponding to a downed server) to persist in a local DNS server until the TTL for the invalid IP address expires.
One aspect of the present invention provides an improved method and system for serving IP addresses to a client, based on a selected set of performance metrics. In accordance with this invention, a global server load-balancing (GSLB) switch is provided as a proxy for an authoritative DNS server, together with one or more site switches each associated with one or more host servers. Both the GSLB switch and the site switch can be implemented using the same type of switch hardware in one embodiment. Each site switch provides the GSLB switch with current site-specific information regarding the host servers associated with the site switch. Under one aspect of the present invention, when an authoritative DNS server resolves a host name in a query and returns one or more IP addresses, the GSLB switch filters the IP addresses using the performance metrics compiled from the site-specific information collected from the site switches. The GSLB switch then returns a ranked or weighted list of IP addresses to the inquirer. In one embodiment, the IP address that is estimated to provide the best expected performance for the client is placed at the top of the list.
Examples of suitable performance metrics include availability metrics (e.g., a server's or an application's health), load metrics (e.g., a site switch's session capacity or a corresponding preset threshold), and proximity metrics (e.g., a round-trip time between the site switch and a requesting DNS server, the geographic location of the host server, or the topological distance between the host server and the client program). (A topological distance is the number of hops between the server and the client.) Another proximity metric is the site switch's “flashback” speed (i.e., how quickly a switch receives a health check result). Yet another metric is a connection-load metric that is based on a measure of new connections-per-second at a site. The ordered list can also be governed by other policies, such as preferring the least-selected host server.
The present invention is better understood upon consideration of the detailed description of the embodiments below, in conjunction with the accompanying drawings.
Embodiments for global server load-balancing are described herein. In the following description, numerous specific details are given to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention can be practiced without one or more of the specific details, or with other methods, components, materials, etc. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.
Reference throughout this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
In the remainder of this detailed description, for the purpose of illustrating embodiments of the present invention only, the list of IP addresses returned is assumed to be the virtual IP addresses configured on the proxy servers at switches 18A, 18B, 22A and 22B (sites 20 and 24). In one embodiment, GSLB switch 12 determines which site switch would provide the best expected performance (e.g., response time) for client 28 and returns the IP address list with a virtual IP address configured at that site switch placed at the top. (Within the scope of the present invention, other forms of ranking or weighting the IP addresses in the list are also possible.) Client program 28 can receive the ordered list of IP addresses, and typically selects the first IP address on the list to access the corresponding host server.
For example, for a connection-load metric in one embodiment, site-specific metric agent(s) 407 can perform sampling to obtain connections-per-second at their respective sites, and then obtain load averages from the samples or perform other calculations. The site-specific metric collector 406 of the GSLB switch 12 then obtains the load averages from the site-specific metric agent(s) 407 and provides these load averages to the switch controller 401, to allow the switch controller 401 to use the load averages to rank the IP addresses on the ordered list. Alternatively, or in addition to the site-specific metric agent(s) 407, the switch controller 401 can perform at least some of the connection-load calculations from sampling data provided by the site-specific metric agent(s) 407.
Routing metric collector 405 collects routing information from routers (e.g., topological distances between nodes on the Internet).
In one embodiment, the metrics used in a GSLB switch 12 include (a) the health of each host server and selected applications, (b) each site switch's session capacity threshold, (c) the round trip time (RTT) between a site switch and a client in a previous access, (d) the geographical location of a host server, (e) the connection-load measure of new connections-per-second at a site switch, (f) the current available session capacity in each site switch, (g) the “flashback” speed between each site switch and the GSLB switch (i.e., how quickly each site switch responds to a health check from the GSLB switch), and (h) a policy called the “Least Response Selection” (LRS), which prefers the site least selected previously. Many of these performance metrics can be given default values. The metrics can be applied in any order, and each metric can be disabled. In one embodiment, the LRS metric is always enabled.
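The successive, ordered application of these metrics can be sketched in Python as follows; the metric functions are hypothetical placeholders, and the actual act-by-act processing is described below.

# Each metric receives the current candidate IP addresses and returns the ones that
# remain "best" under that metric; later metrics run only while a tie remains.
def order_candidates(candidates, metrics):
    for metric in metrics:
        if len(candidates) <= 1:
            break
        survivors = metric(candidates)
        if survivors:                      # a metric that cannot discriminate leaves the list unchanged
            candidates = survivors
    return candidates

def healthy_only(ips):
    # Hypothetical stand-in for metric (a); a real check would query server/application health.
    return [ip for ip in ips if ip != "10.9.9.9"]

# Metrics would be supplied in the order (a) through (h) listed above.
print(order_candidates(["10.1.1.1", "10.9.9.9", "10.2.2.2"], [healthy_only]))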
As shown in
After act 100, if the list of candidate IP addresses for the best site has multiple IP addresses, it is further assessed in act 102 based upon the capacity threshold of the site switch serving that IP address. Each site switch may have a different maximum number of TCP sessions it can serve. For example, the default number for the “ServerIron” product of Foundry Networks is one million sessions, although it can be configured to a lower number. The virtual IP address configured at site switch 18B may be disqualified from being the “best” IP address if the number of sessions for switch 18B exceeds a predetermined threshold percentage (e.g., 90%) of the maximum number of sessions. (Of course, the threshold value of 90% of the maximum capacity can be changed.) After act 102, if the list of IP addresses has only one IP address (act 103), the list of IP addresses is returned to client program 28 at act 108.
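The session-capacity threshold test of act 102 can be summarized by the following Python sketch, using the example values above (a one-million-session maximum and a 90% threshold); the function name and parameters are illustrative only.

# A site switch is disqualified when its current session count exceeds the threshold
# percentage of its configured maximum number of sessions.
def passes_capacity_threshold(current_sessions, max_sessions=1_000_000, threshold_pct=90):
    return current_sessions <= max_sessions * threshold_pct / 100

print(passes_capacity_threshold(950_000))  # False: above 90% of 1,000,000 sessions
print(passes_capacity_threshold(400_000))  # True: well under the threshold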
After act 102, if the IP address list has multiple IP addresses (act 103), the remaining IP addresses on the list can then be reordered in act 104 based upon a round-trip time (RTT) between the site switch for the IP address (e.g., site switch 18B) and the client (e.g., client 28). The RTT is computed as the interval between the time when a client machine requests a TCP connection to a proxy server configured on a site switch (by sending the proxy server a TCP SYN packet) and the time the site switch receives a TCP ACK packet from the client program. (In response to the TCP SYN packet, a host server sends a TCP SYN ACK packet to indicate acceptance of the TCP connection; the client machine returns a TCP ACK packet to complete the setting up of the TCP connection.) The GSLB switch (e.g., GSLB switch 12) maintains a database of RTTs, which it creates and updates from data received periodically from the site switches (e.g., site switches 18A, 18B, 22A and 22B). Each site collects and stores RTT data for each TCP connection established with a client machine. In one embodiment, the GSLB switch favors one host server over another only if the difference in their RTTs with a client machine is greater than a specified percentage, the default specified percentage value being 10%, for example. To prevent bias, the GSLB switch ignores, by default, RTT values for 5% of client queries from each responding network, for example. At act 105, if the top entries on the list of IP addresses do not have equal RTTs, the list of IP addresses is returned to client program 28 at act 108.
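The RTT comparison can be sketched in Python as follows. The description above does not state the base against which the percentage difference is measured, so measuring it relative to the smaller RTT is an assumption made only for this illustration.

# One site is favored over another only when its RTT to the client is better by more
# than the tolerance percentage (10% by default); otherwise the sites are treated as
# having equal RTTs and the next metric decides.
def compare_rtt(rtt_a, rtt_b, tolerance_pct=10):
    if abs(rtt_a - rtt_b) <= min(rtt_a, rtt_b) * tolerance_pct / 100:
        return 0                      # within tolerance: considered equal
    return -1 if rtt_a < rtt_b else 1

print(compare_rtt(100.0, 108.0))  # 0: within the 10% tolerance, treated as equal
print(compare_rtt(100.0, 150.0))  # -1: the first site is preferred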
If multiple sites have equal RTTs (act 105), then the list is reordered in act 106 based upon the location (geography) of the host server. The geographic location of a server is determined according to whether the IP address is a real address or a virtual IP address (“VIP”). For a real IP address, the geographical region for the host server can be determined from the IP address itself. Under IANA, regional registries RIPE (Europe), APNIC (Asia/Pacific Rim) and ARIN (the Americas and Africa) are each assigned different prefix blocks. In one embodiment, an IP address administered by one of these regional registries is assumed to correspond to a machine located inside the geographical area administered by the regional registry. For a VIP, the geographic region is determined from the management IP address of the corresponding site switch. Of course, a geographical region can be prescribed for any IP address to override the geographic region determined from the procedure above. The GSLB switch prefers an IP address that is in the same geographical region as the client machine in an embodiment. At act 107, if the top two entries on the IP list are not equally ranked, the IP list is sent to the client program 28 at act 108.
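A simplified Python sketch of this geographic preference follows. The prefix-to-region table is a hypothetical stand-in for the regional registries' allocation data, and the override argument mirrors the prescribed-region option described above.

# Hypothetical prefix-to-region table; real code would consult ARIN/RIPE/APNIC allocation data.
HYPOTHETICAL_PREFIX_REGIONS = {
    "62.": "EUROPE",          # e.g., a RIPE-administered block
    "203.": "ASIA_PACIFIC",   # e.g., an APNIC-administered block
    "140.": "AMERICAS",       # e.g., an ARIN-administered block
}

def region_of(ip_address, overrides=None):
    # An administrator-prescribed region overrides the region derived from the address.
    if overrides and ip_address in overrides:
        return overrides[ip_address]
    for prefix, region in HYPOTHETICAL_PREFIX_REGIONS.items():
        if ip_address.startswith(prefix):
            return region
    return "UNKNOWN"

def prefer_same_region(candidates, client_ip):
    client_region = region_of(client_ip)
    same = [ip for ip in candidates if region_of(ip) == client_region]
    return same or candidates   # no match: leave the list unchanged for the next metric

print(prefer_same_region(["62.1.2.3", "140.4.5.6"], "140.9.9.9"))  # prefers the same-region address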
After act 107, if multiple sites are of equal rank for the best site, the IP addresses can then be reordered based upon site connection load (act 114). The connection-load metric feature allows comparison of sites based on the connection-load on their respective agent (e.g., at the metric agent 407 of the site ServerIron switch 18A in
The connection-load is a measure of new connections-per-second on the agent 407 in one embodiment. An administrator can set a threshold limit that a given site's connection-load must satisfy to pass the metric; can select the number of load sampling intervals and the duration of each interval; and can select the relative weight for each interval used to calculate the average load over a period of time (i.e., new connections per that period of time).
The “connection load limit” value specifies the load limit that any site must satisfy to pass the metric. The minimum value is 1, and a parser or other software component in the site switch 18A, for instance, limits the maximum value; there need not be a default value. By default, this connection-load metric is turned off, and it is turned on when the load limit is specified. The average load for a given site is calculated using the user-defined weights and intervals, as explained below. If the calculated average load is less than the specified load limit, the site is passed on to the next stage of the GSLB algorithm described herein; otherwise, that site is eliminated from the set of potential candidates.
In one embodiment, the number of “load sampling intervals” and the “sampling rate” can be configured. The sampling rate defines the duration of each sampling interval, and load averages are maintained over windows that are multiples of that rate. For example, if 6 sampling intervals and a sampling rate of 5 seconds are chosen, the site will sample the average load at 5, 10, 15, 20, 25, and 30 seconds. At any instant, the site will have the average load for the previous 5 seconds, 10 seconds, 15 seconds, 20 seconds, 25 seconds, and 30 seconds. This is a “moving average” in that, at the 35th second, for example, the average for the 5th to 35th seconds is calculated. Note that even though this is a moving average, its resolution is limited by the sampling rate: since samples are taken every 5 seconds, at the 7th second the average for the 1st to 5th seconds is available, not the average for the 2nd to 7th seconds.
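This sampling behavior can be sketched in Python as follows, for 6 intervals at a 5-second sampling rate; the class and method names are illustrative, and reporting the load as connections-per-second is an assumption made only for this sketch.

from collections import deque

class ConnectionLoadSampler:
    # Keeps the new-connection counts of the most recent completed sampling intervals,
    # so an average over the last 1..num_intervals intervals is available at any time,
    # at the granularity of the sampling rate (the 2nd-to-7th-second case above cannot
    # be represented).
    def __init__(self, num_intervals=6, sampling_rate_secs=5):
        self.rate = sampling_rate_secs
        self.samples = deque(maxlen=num_intervals)

    def record_interval(self, new_connections):
        # Called once per completed sampling interval (every 5 seconds here).
        self.samples.append(new_connections)

    def average_over(self, num_recent_intervals):
        recent = list(self.samples)[-num_recent_intervals:]
        return sum(recent) / (len(recent) * self.rate) if recent else 0.0

sampler = ConnectionLoadSampler()
for count in [40, 55, 60, 35, 50, 45]:   # six completed 5-second intervals
    sampler.record_interval(count)
print(sampler.average_over(1))   # connections-per-second over the last 5 seconds
print(sampler.average_over(6))   # connections-per-second over the last 30 seconds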
The sampling rate also defines the update interval for the site (e.g., the site-specific metric agent 407) to upload the load averages to the metric collector 406 at the GSLB switch 12. A given site is capable of maintaining load-averages for any number of collectors at a time. Each collector is updated with the load information periodically, and the update interval is also specific to the collector in various example embodiments.
The minimum number of intervals is 1 and the maximum is 8 in one embodiment. The default number is 5, which is set when the connection load limit is configured. It is appreciated that these are merely illustrative examples and may differ depending on the particular implementation.
For the load-sampling interval, the minimum value is 1 second and the maximum value is 60 seconds. The default value is 5 seconds. The maximum range for the load-average calculation is therefore 60 seconds×8 intervals=480 seconds=8 minutes, so up to the previous 8-minute average can be considered for load analysis. Again, these are example settings.
Weights can be assigned to each interval to calculate the average load. By default in one embodiment, each interval is given an equal weight of 1. The average load for a site can be calculated using the following formula:

Average Load = [ (Weight of interval 1 × AvgLoad of interval 1) + (Weight of interval 2 × AvgLoad of interval 2) + . . . + (Weight of interval N × AvgLoad of interval N) ] / [ Weight of interval 1 + Weight of interval 2 + . . . + Weight of interval N ]

where N = number of sampling intervals and AvgLoad of interval i = new connections of interval i.
The contribution of any interval can be nullified by giving it a weight of zero. If every interval is given a weight of zero, the average load is treated as zero, since the formula cannot divide by a weight sum of zero. In one embodiment, the site-specific metric agent 407 can calculate this average load and provide it to the metric collector 406 at the GSLB switch 12. In other embodiments, the metric collector 406 and/or the switch controller 401 can perform the average load calculation based on values collected and provided by the site-specific metric agent 407.
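A short Python sketch of this calculation follows; it applies the formula above, treats any non-configured weight as zero (as described for the weights command below), and defines the all-zero case as zero instead of dividing by zero. It is illustrative only.

def average_load(interval_loads, configured_weights):
    # interval_loads[i]  = new connections of interval i (the AvgLoad of interval i above)
    # configured_weights = user-defined weights, assigned starting from the first interval
    weights = list(configured_weights) + [0] * (len(interval_loads) - len(configured_weights))
    weight_sum = sum(weights[: len(interval_loads)])
    if weight_sum == 0:
        return 0.0                     # all weights zero: defined as zero, no division
    weighted = sum(w * load for w, load in zip(weights, interval_loads))
    return weighted / weight_sum

# Six intervals with the default equal weights of 1 give a plain average:
print(average_load([40, 55, 60, 35, 50, 45], [1, 1, 1, 1, 1, 1]))  # 47.5
# Configuring only the first weight counts only the 5-second interval:
print(average_load([40, 55, 60, 35, 50, 45], [1]))                 # 40.0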
By default, the connection-load metric is not turned on in the GSLB algorithm. The metric is automatically turned on when the user specifies the connection-load limit, in an embodiment. The specific configuration needed for connection-load sampling and calculation can be set on the switch controller 401, whether the switch 12 is used for GSLB or as a site-specific switch.
To configure the connection load limit (such as a connection load limit of 500), at the GSLB policy configuration level, the following example command can be used:
SW-GSLB-Controller(config-gslb-policy)# connection-load limit 500
Again, as described above, if the calculated average load is less than this limit, then the site is kept as a potential candidate.
To configure the number of sampling intervals and the sampling rate (e.g., sampling rate=5, interval=6), the following example command may be used:
SW-GSLB-Controller(config-gslb-policy)# connection-load intervals 6 5
To configure the interval weights, the following example command can be used:
SW-GSLB-Controller(config-gslb-policy)# connection-load weights 1 2 3 4 5 6
The syntax of this command is:
connection-load weights <weight of interval-1> <weight of interval-2> <weight of interval-3> . . . up to 8 weights, for example.
Weights need not be configured for every interval if intervals beyond a certain point are not to be considered. The configured weights are assigned to intervals starting from the first, and any non-configured interval is assigned a weight of zero. For example, if only the 5-second average is desired, the following can be used:
SW-GSLB-Controller(config-gslb-policy)# connection-load intervals 6 5
SW-GSLB-Controller(config-gslb-policy)# connection-load weights 1
Thus, even though 6 intervals are configured in the above example, all intervals other than the first are nullified by their zero weights.
By default the connection-load metric is not included in the GSLB algorithm. Once the connection-load limit is configured, the metric is included after the geographic-location metric in the metric order according to one embodiment, such as shown in
At act 115, if there are not multiple candidates of equal rank at the top of the IP list that have passed the connection-load metric, then the IP address list is sent to the client program 28 at act 108. After act 115, if multiple sites are of equal rank for the best site, the IP addresses can then be reordered based upon available session capacity (act 109). For example, in one embodiment, if switch 18A has 1,000,000 sessions available and switch 22B has 800,000 sessions available, switch 18A is preferred if a tolerance limit, representing the difference in sessions available expressed as a percentage of capacity in the larger switch, is exceeded. For example, if the tolerance limit is 10%, switch 18A must have at a minimum 100,000 more sessions available than switch 22B to be preferred. If an IP address is preferred (act 110), the IP address is placed at the top of the IP address list, and the list is then returned to the requesting entity at act 108. Otherwise, if the session capacity does not resolve the best IP address, act 111 attempts a resolution based upon a “flashback” speed. The flashback speed is the time required for a site switch to respond to layer 4 and layer 7 health checks by the GSLB switch, and is thus a measure of the load on the host server. Again, the preferred IP address will correspond to a flashback speed that is better than the next best by more than a preset tolerance limit.
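The available-session-capacity comparison of act 109 can be sketched in Python as follows, using the example figures above. The description does not spell out whether the tolerance percentage is taken of the larger switch's available sessions or of its total capacity; the larger available count is used here, consistent with the 100,000-session example.

# The switch with more available sessions is preferred only when the difference exceeds
# the tolerance limit; otherwise the comparison falls through to the flashback metric.
def prefer_by_capacity(avail_a, avail_b, tolerance_pct=10):
    larger = max(avail_a, avail_b)
    if abs(avail_a - avail_b) > larger * tolerance_pct / 100:
        return "A" if avail_a > avail_b else "B"
    return None

print(prefer_by_capacity(1_000_000, 800_000))  # "A": 200,000 exceeds 10% of 1,000,000
print(prefer_by_capacity(1_000_000, 950_000))  # None: 50,000 is within the tolerance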
In one embodiment, flashback speeds are measured for well-known applications (layer 7) and their corresponding TCP ports (layer 4). For other applications, flashback speeds are measured for user-selected TCP ports. Layer 7 (application-level) flashback speeds are compared first, if applicable. If the application flashback speeds fail to provide a best IP address, layer 4 flashback speeds are compared. If a host server is associated with multiple applications, the GSLB switch selects the slowest response time among the applications for the comparison. At act 112, if a best IP address is resolved, the IP address list is sent to client program 28 at act 108. Otherwise, at act 113, an IP address in the site that is least often selected to be the “best” site is chosen. The IP address list is then sent to client program 28 (act 108).
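A simplified Python sketch of this flashback comparison follows; the data layout, the millisecond units, and the numeric tolerance value are assumptions made only for illustration.

def effective_flashback(app_times_ms):
    # A server hosting several applications is represented by its slowest response time.
    return max(app_times_ms)

def pick_by_flashback(candidates, tolerance_ms=10.0):
    # candidates maps each IP address to its layer 7 and layer 4 health-check response times.
    ips = list(candidates)
    if len(ips) == 1:
        return ips[0]
    for layer in ("layer7", "layer4"):     # application-level first, then TCP-level
        scored = sorted((effective_flashback(data[layer]), ip) for ip, data in candidates.items())
        if scored[1][0] - scored[0][0] > tolerance_ms:
            return scored[0][1]            # fastest flashback wins by more than the tolerance
    return None                            # still tied: fall through to least-response selection

candidates = {
    "10.1.1.1": {"layer7": [12.0, 30.0], "layer4": [5.0]},
    "10.2.2.2": {"layer7": [28.0], "layer4": [20.0]},
}
print(pick_by_flashback(candidates))   # layer 7 is inconclusive (30 vs 28); layer 4 selects 10.1.1.1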
Upon receipt of the IP address list, the client program 28 uses the best IP address selected (i.e., the one at the top of the list) to establish a TCP connection with a host server. Even then, if there is a sudden traffic surge that causes a host server to be overloaded, or if the host servers or the applications at the site become unavailable in the meantime, the site switch can redirect the TCP connection request to another IP address using, for example, an existing HTTP redirection procedure.
To provide an RTT under an embodiment of the present invention described above, at the first time a client accesses an IP address, a site switch (e.g., site switch 22A of
All of the above U.S. patents, U.S. patent application publications, U.S. patent applications, foreign patents, foreign patent applications and non-patent publications referred to in this specification and/or listed in the Application Data Sheet, are incorporated herein by reference, in their entirety.
The above description of illustrated embodiments of the invention, including what is described in the Abstract, is not intended to be exhaustive or to limit the invention to the precise forms disclosed. While specific embodiments of, and examples for, the invention are described herein for illustrative purposes, various equivalent modifications are possible within the scope of the invention and can be made without deviating from the spirit and scope of the invention.
These and other modifications can be made to the invention in light of the above detailed description. The terms used in the following claims should not be construed to limit the invention to the specific embodiments disclosed in the specification and the claims. Rather, the scope of the invention is to be determined entirely by the following claims, which are to be construed in accordance with established doctrines of claim interpretation.
The present application is a continuation-in-part of U.S. application Ser. No. 09/670,487, entitled “GLOBAL SERVER LOAD BALANCING,” filed Sep. 26, 2000, assigned to the same assignee as the present application, and incorporated herein by reference in its entirety.
Relation | Number | Date | Country
---|---|---|---
Parent | 09670487 | Sep. 2000 | US
Child | 10206580 | | US