The present invention relates to computer networks in general, and in particular to load balancing client requests among redundant network servers in different geographical locations.
In computer networks, such as the Internet, preventing a server from becoming overloaded with requests from clients may be accomplished by providing several servers having redundant capabilities and managing the distribution of client requests among the servers through a process known as “load balancing”.
In one early implementation of load balancing, a Domain Naming System (DNS) server connected to the Internet is configured to maintain several IP addresses for a single domain name, with each address corresponding to one of several servers having redundant capabilities. The DNS server receives a request for address translation and responds by returning the list of server addresses from which the client chooses one address at random to connect to. Alternatively, the DNS server returns a single address chosen either at random or in a round-robin fashion, or actively monitors each of the servers and returns a single address based on server load and availability.
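The round-robin alternative described above can be sketched as follows. This is a minimal illustration, not a DNS implementation; the domain and the address pool are hypothetical, and a real DNS server would draw the addresses from its zone data.

```python
from itertools import cycle

# Hypothetical pool of redundant server addresses for one domain name.
SERVER_ADDRESSES = ["192.0.2.10", "192.0.2.11", "192.0.2.12"]
_rotation = cycle(SERVER_ADDRESSES)

def resolve_round_robin(domain: str) -> str:
    """Answer an address-translation request with the next server
    address in round-robin order."""
    return next(_rotation)
```

Successive queries walk through the pool and wrap around, so load is spread evenly across the redundant servers without the resolver tracking any server state.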
More recently, a device known as a “load balancer,” such as the Web Server Director, commercially available from the Applicant/assignee, has been used to balance server loads as follows. The load balancer is provided as a gateway to several redundant servers typically situated in a single geographical location and referred to as a “server farm” or “server cluster.” DNS servers store the IP address of the load balancer rather than the addresses of the servers to which the load balancer is connected. The load balancer's address is referred to as a “virtual IP address” in that it masks the addresses of the servers to which it is connected. Client requests are addressed to the virtual IP address of the load balancer which then sends the request to a server based on server load and availability or using other known techniques.
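One common "server load and availability" policy a load balancer may apply behind its virtual IP address is least-connections selection. The sketch below assumes a hypothetical farm whose per-server state the balancer tracks; the addresses and counters are illustrative only.

```python
# Hypothetical view of a server farm behind one virtual IP address:
# the load balancer tracks per-server active connections and health.
servers = {
    "10.0.0.1": {"connections": 12, "available": True},
    "10.0.0.2": {"connections": 3, "available": True},
    "10.0.0.3": {"connections": 0, "available": False},
}

def pick_server(farm: dict) -> str:
    """Forward the request to the available server with the fewest
    active connections."""
    candidates = {ip: s for ip, s in farm.items() if s["available"]}
    return min(candidates, key=lambda ip: candidates[ip]["connections"])
```

Here the unavailable server is excluded even though it is idle, and the request goes to the lightly loaded available server.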
Just as redundant servers in combination with a load balancer may be used to prevent server overload, redundant server farms may be used to reroute client requests received at a first load balancer/server farm to a second load balancer/server farm where none of the servers in the first server farm are available to tend to the request. One rerouting method currently being used involves sending an HTTP redirect message from the first load balancer/server farm to the client instructing the client to reroute the request to the second load balancer/server farm indicated in the redirect message. This method of load balancing is disadvantageous in that it can only be employed in response to HTTP requests, and not for other types of requests such as FTP requests. Another rerouting method involves configuring the first load balancer to act as a DNS server. Upon receiving a DNS request the first load balancer simply returns the virtual IP address of the second load balancer. This method of load balancing is disadvantageous in that it can only be employed in response to DNS requests where there is no guarantee that the request will come to the first load balancer since the request does not come directly from the client, and where subsequent requests to intermediate DNS servers may result in a previously cached response being returned with a virtual IP address of a load balancer that is no longer available.
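The HTTP-redirect rerouting method described above amounts to returning a 302 response naming the second load balancer/server farm. A minimal sketch (the target address is hypothetical) follows; as the text notes, this only works for HTTP traffic, since FTP and other protocols have no equivalent redirect message.

```python
def http_redirect(new_location: str) -> bytes:
    """Build a minimal HTTP 302 response instructing the client to
    reissue its request to the second load balancer/server farm."""
    return (
        "HTTP/1.1 302 Found\r\n"
        f"Location: {new_location}\r\n"
        "Content-Length: 0\r\n"
        "\r\n"
    ).encode("ascii")
```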
Where redundant server farms are situated in more than one geographical location, the geographical location of a client may be considered when determining the load balancer to which the client's requests should be routed, in addition to employing conventional load balancing techniques. However, routing client requests to the geographically nearest server, load balancer, or server farm might not necessarily provide the client with the best service if, for example, routing the request to a geographically more distant location would otherwise result in reduced latency, fewer hops, or provide more processing capacity at the server.
Certain embodiments disclosed herein include a method for load balancing client requests among a plurality of internet service provider (ISP) links in a multi-homed network. The method comprises resolving an incoming domain name server (DNS) query for an address associated with a domain name of a server within the multi-homed network, wherein the incoming DNS query is received from a client;
selecting, based on at least one load balancing criterion, one ISP link from the plurality of ISP links; and returning an internet protocol (IP) address selected from a range of IP addresses associated with the selected ISP link, whereby subsequent requests from the client are routed through the selected ISP link.
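The resolve-select-return flow above can be sketched as follows. The link names, address ranges, and the least-load criterion are all assumptions for illustration; any of the load balancing criteria discussed herein could stand in for the selection step.

```python
import random

# Hypothetical ISP links: each carries a range of addresses and a
# current load metric (0.0 = idle, 1.0 = saturated).
ISP_LINKS = {
    "isp_a": {"ips": ["203.0.113.1", "203.0.113.2"], "load": 0.7},
    "isp_b": {"ips": ["198.51.100.1", "198.51.100.2"], "load": 0.2},
}

def resolve_dns_query(domain: str) -> str:
    """Answer a DNS query with an address drawn from the range of the
    least-loaded ISP link, so the client's subsequent traffic enters
    the multi-homed network through that link."""
    link = min(ISP_LINKS.values(), key=lambda info: info["load"])
    return random.choice(link["ips"])
```

Because the returned address belongs to the selected link's range, the routing of all follow-on traffic is steered by the DNS answer alone, with no per-packet intervention.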
Certain embodiments disclosed herein also include a device for load balancing client requests among a plurality of internet service provider (ISP) links in a multi-homed network. The device comprises a network controller configured to resolve an incoming domain name server (DNS) query for an address associated with a domain name of a server within the multi-homed network, wherein the incoming DNS query is received from a client; and a balancer module configured to select, based on at least one load balancing criterion, one ISP link from the plurality of ISP links; wherein the network controller is further configured to return an internet protocol (IP) address selected from a range of IP addresses associated with the selected ISP link, whereby subsequent requests from the client are routed through the selected ISP link.
Certain embodiments disclosed herein also include a method for load balancing traffic among a plurality of data paths between a client and a destination server connected through a network system, wherein each of the plurality of data paths is connected to a router configured with a unique internet protocol (IP) address. The method comprises receiving a data packet from a client; selecting a data path to route the received data packet to the destination server using a decision function, wherein the decision function is based at least on a content type of the received data packet; and routing the received data packet and subsequent data packets from the client to the destination server over the selected data path.
Certain embodiments disclosed herein also include a content router for load balancing traffic among a plurality of data paths between a client and a destination server connected through a network system, wherein each of the plurality of data paths is connected to a router configured with a unique internet protocol (IP) address. The content router comprises an interface for receiving a data packet from a client; a memory for maintaining a decision table summarizing decision functions computed for the destination server for each type of content; and a load balancing module for selecting a data path to route the received data packet to the destination server using a decision function, wherein the decision function is based at least on a content type of the received data packet; wherein the interface is further configured to route the received data packet and subsequent data packets from the client to the destination server over the selected data path.
The present invention will be understood and appreciated from the following detailed description, taken in conjunction with the drawings in which:
Reference is now made to
Typical operation of the triangulation load balancing system of
LB2 is preferably capable of having multiple virtual IP addresses as is well known. It is a particular feature of the present invention for LB2 to designate a currently unused virtual IP address, such as 200.100.1.1, for LB1's use and store the mapping between the IP address of LB1 and the designated IP address in a triangulation mapping table 32, as is shown more particularly with reference to
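A minimal sketch of triangulation mapping table 32 follows. The designated virtual IP address 200.100.1.1 is taken from the example above; LB1's own address (10.1.1.1) is hypothetical, since the text does not specify it.

```python
# Triangulation mapping table: each peer load balancer's address maps
# to a locally unused virtual IP address designated for that peer.
triangulation_table: dict[str, str] = {}

def designate(peer_lb_address: str, unused_virtual_ip: str) -> None:
    """Record the mapping so that traffic arriving at the designated
    virtual IP can be associated with the forwarding peer."""
    triangulation_table[peer_lb_address] = unused_virtual_ip

# LB2 designates 200.100.1.1 for LB1 (LB1's address is hypothetical).
designate("10.1.1.1", "200.100.1.1")
```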
As shown in the example of
Reference is now made to
Typical operation of the network proximity load balancing system of
Upon receiving a request, LB1 may decide to service the request or not based on normal load balancing considerations. In any case, LB1 may check proximity table 54 for an entry indicating the subnet corresponding to the subnet of the source IP address of the incoming request. As is shown more particularly with reference to
A “network proximity” may be determined for a requestor such as client 26 with respect to each load balancer/server farm by measuring and collectively considering various attributes of the relationship, such as latency, the number of hops between client 26 and each server farm, and the processing capacity and quality of each server farm site. To determine comparative network proximity, LB1, LB2 and LB3 preferably each send a polling request 58 to client 26 using known polling mechanisms. Known polling mechanisms include pinging client 26; however, where pinging would fail because an intervening firewall or NAT device filters out the polling message, a TCP ACK message may be sent instead. A TCP ACK may be sent to the client's source IP address and port. If the client's request arrived via a UDP connection, a TCP ACK to the client's source IP address and port 80 may be used. One or both TCP ACK messages should bypass any intervening NAT or firewall and cause client 26 to send a TCP RST message, which may be used to determine both latency and TTL. While TTL does not necessarily indicate the number of hops from the client to the load balancer, comparing the TTL values received at LB1, LB2, and LB3 should indicate whether relatively more or fewer hops were traversed.
Another polling method involves sending a UDP request to a relatively high port number at the client, such as 2090. This request would typically be answered with an “ICMP port unreachable” reply which would indicate the TTL value of the UDP request on arrival at the client. Since the starting TTL value of each outgoing UDP request is known, the actual number of hops to the client may be determined by subtracting the TTL value on arrival at the client from the starting TTL value. A combination of pinging, TCP ACK, UDP, TCP SYN, and other polling techniques may be used since any one polling request might fail.
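The TTL subtraction described above can be expressed directly. This assumes the sender records the starting TTL of its UDP probe and reads the arrival TTL out of the "ICMP port unreachable" reply, as the text describes.

```python
def hops_to_client(start_ttl: int, ttl_at_client: int) -> int:
    """Each router on the path decrements TTL by one, so the hop count
    is the starting TTL minus the TTL observed on arrival at the
    client (as reported in the ICMP port unreachable reply)."""
    if not 0 <= ttl_at_client <= start_ttl:
        raise ValueError("arrival TTL must be between 0 and the starting TTL")
    return start_ttl - ttl_at_client
```

For example, a probe sent with TTL 64 that arrives with TTL 52 crossed 12 hops.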
Client 26 is shown in
As was described above, a load balancer that receives a request from a client may check proximity table 54 for an entry indicating the subnet corresponding to the subnet of the source IP address of the incoming request. Thus, if a corresponding entry is found in proximity table 54, the request is simply routed to the location having the best network proximity. Although the location having the best network proximity to a particular subnet may have already been determined, the load balancer may nevertheless decide to forward an incoming request to a location that does not have the best network proximity should a load report received from the best location indicate that the location is too busy to receive requests. In addition, the best network proximity to a particular subnet may be periodically redetermined, such as at fixed times or after a predetermined amount of time has elapsed from the time the last determination was made.
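The proximity-table lookup with periodic redetermination can be sketched as a cache with an expiry, under the assumption that redetermination is triggered by elapsed time; the one-hour interval, subnets, and site names below are all hypothetical.

```python
import time

PROXIMITY_TTL_SECONDS = 3600  # hypothetical redetermination interval

# subnet -> (best site, time the determination was made)
proximity_table: dict[str, tuple[str, float]] = {}

def best_site_for(subnet: str):
    """Return the cached best-proximity site for a subnet, or None if
    the entry is missing or stale and proximity must be re-measured."""
    entry = proximity_table.get(subnet)
    if entry is None:
        return None
    site, determined_at = entry
    if time.time() - determined_at > PROXIMITY_TTL_SECONDS:
        return None
    return site
```

A None result is the signal to poll the client again from each site; a hit routes the request directly, subject to the load reports mentioned above.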
As is shown more particularly with reference to
The present invention can also be used in a multi-homing environment; i.e., for management of networks that have multiple connections to the Internet through multiple Internet Service Providers (ISPs).
Reference is now made to
As illustrated in
As illustrated in
Based on these polling results, content router 145 chooses, for example, router 135 as its first choice for connecting client 105 with server 150. As illustrated in
In turn, as illustrated in
As illustrated in
Reference is now made to
It can be seen from
Referring back to
Similarly, referring back to
Reference is now made to
The path quality factor Qi is defined as:
Path Quality Factor Qi=Q(traffic load; packet loss; link pricing)
The path quality factor, for a given path, is typically dependent on the data content of the data packet. Typical path quality weighting factors are shown in Table 1 for the listed data content. It is appreciated that path quality factor is typically checked periodically, by the content router 508, for each Internet path.
It is appreciated that the managing of the routing by the content router 508, typically depends on the following factors: the content type, the number of hops to the destination, the response time of the destination, the availability of the path, the costing of the link and the average packet loss in the link.
In order for the content router 508 to determine the “best” path, a “Decision Parameter Table” is built for each content type. It is appreciated that the content type may vary between the application type and actual content (URL requested, or any other attribute in the packet). The Decision Parameter Table is preferably dependent on the parameters: Data packet content; Hops weighting factor; Packet loss factor and Response time factor. Typical values of these parameters are also given in Table 1.
In addition to the parameters listed in Table 1, the following additional parameters may also be taken into consideration: Hops count factor; Response time factor; Path quality factor; and Packet loss factor.
A Destination Table is built to summarize the following factors: the content type, the number of hops to the destination, the response time of the destination, the availability of the path, and the average packet loss in the link, based on proximity calculations, as previously defined.
Using the relevant data, as typically listed in Table 1, the content router 508 determines a Decision Function Fcontent for each path:
Fcontent=F(Hops weighting factor*Hops count factor; Response weighting factor*Response time factor; Path quality weighting factor*Path quality factor; Packet loss weighting factor*Packet loss factor).
It is appreciated that the above parameters, which are used in the calculation of Fcontent, are typically normalized for each path.
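The weighted, normalized decision function can be sketched as below. The metric names, sample values, and the convention that every factor is oriented so that a lower value is better are assumptions for this sketch; normalization scales each factor to [0, 1] across the candidate paths, as the text indicates.

```python
def select_path(paths: dict, weights: dict) -> str:
    """Compute Fcontent for each candidate path as a weighted sum of
    its normalized factors and return the path with the lowest
    (best) score. All factors are assumed lower-is-better."""
    metrics = ("hops", "response_time", "path_quality", "packet_loss")
    # Normalize each factor to [0, 1] across all candidate paths.
    norm = {}
    for m in metrics:
        values = [p[m] for p in paths.values()]
        lo, hi = min(values), max(values)
        span = (hi - lo) or 1.0
        norm[m] = {name: (p[m] - lo) / span for name, p in paths.items()}
    scores = {
        name: sum(weights[m] * norm[m][name] for m in metrics)
        for name in paths
    }
    return min(scores, key=scores.get)
```

The per-content weights would come from the Decision Parameter Table for the packet's content type, so the same set of paths can rank differently for different traffic.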
Based on the Decision Function, the content router 508 selects one of the available paths. The data packet is then routed through the selected path. The Decision Function for a particular path is determined by an administrative manager (not shown) and may depend, for example, on the minimum number of hops, on the relevant response time, on the packet loss, on the path quality, or on any combination of the above parameters, according to the administrative preferences.
The operation of the content router 508 is summarized in the flowchart 600 illustrated in
If the destination 504 is unfamiliar, the content router 508 performs a destination check (step 610). The destination check is performed by using the proximity methods, as described hereinabove, by generating actual web traffic towards the destination subnet. This function, as carried out by the content router 508, comprises building a Destination Table (
Thus it may be appreciated that the present invention enables a multi-homed network architecture to realize the full benefits of its redundant route connections by maintaining fault tolerance and by balancing the load among these connections, and preferably using data packet content information in an intelligent decision making process.
It is appreciated that elements of the present invention described hereinabove may be implemented in hardware, software, or any suitable combination thereof using conventional techniques.
It is appreciated that the steps described with reference to
It is appreciated that various features of the invention which are, for clarity, described in the contexts of separate embodiments may also be provided in combination in a single embodiment. Conversely, various features of the invention which are, for brevity, described in the context of a single embodiment may also be provided separately or in any suitable subcombination.
It will be appreciated by persons skilled in the art that the present invention is not limited to what has been particularly shown and described hereinabove. Rather, the scope of the present invention is defined only by the claims that follow:
This application is a continuation of application Ser. No. 10/449,016 filed Jun. 2, 2003, now U.S. Pat. No. 8,266,319. The 10/449,016 Application is a divisional of application Ser. No. 09/467,763 filed Dec. 20, 1999, now U.S. Pat. No. 6,665,702, which is a continuation-in-part of application Ser. No. 09/115,643, filed Jul. 15, 1998, now U.S. Pat. No. 6,249,801. The contents of the above-referenced applications are herein incorporated by reference.
Publication: US 2012/0303784 A1, Nov. 2012, US.

Related U.S. Application Data: application Ser. No. 13/566,171 is a continuation of parent application Ser. No. 10/449,016 (filed Jun. 2003, US), which is a divisional of parent application Ser. No. 09/467,763 (filed Dec. 1999, US), itself a continuation-in-part of parent application Ser. No. 09/115,643 (filed Jul. 1998, US).