This application relates generally to managing traffic across a computer network, and, more specifically, to techniques for distributing client requests among servers on a network.
The Hyper Text Transfer Protocol (HTTP) is an application level protocol for transferring resources across the Internet, e.g., a network data object or service, where the resource is specified by a Uniform Resource Locator (URL). The Hyper Text Mark-up Language (HTML) is a simple data format that is used to create hypertext documents that are supported by the HTTP protocol. The HTTP protocol is described in “Hypertext Transfer Protocol—HTTP/1.1,” RFC 2616 (June 1999), available through the Internet Engineering Task Force's (IETF's) website. These standards are commonly relied upon in the technology of the World Wide Web (WWW) on the Internet. Other application level Internet protocols include the File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), Lightweight Directory Access Protocol (LDAP), TELNET, Internet Relay Chat (IRC), Internet Message Access Protocol (IMAP), Post Office Protocol (POP), and protocols for streaming media.
A traffic manager is a network device that manages network traffic. Some traffic managers manage the flow of network traffic between network devices. Some traffic managers control the flow of data packets to and from an array of application servers. A traffic manager can manage and distribute Internet, intranet, or other user requests across redundant arrays of network servers, regardless of the platform type. Traffic managers can support a wide variety of network applications such as web browsing, e-mail, telephony, streaming multimedia, and other network protocol traffic. The BIG-IP® family of traffic managers, by F5 Networks of Seattle, Wash., are examples of traffic managers.
In the following detailed description of exemplary embodiments of the invention, reference is made to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific exemplary embodiments in which the invention may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the invention, and it is to be understood that other embodiments can be utilized, and other changes can be made, without departing from the spirit or scope of the present invention. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present invention is defined by the appended claims.
Throughout the specification and claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise.
The phrase “in one embodiment” as used herein does not necessarily refer to the same embodiment, though it may.
As used herein, the term “or” is an inclusive “or” operator, and is equivalent to the term “and/or”, unless the context clearly dictates otherwise.
The term “packet” refers to an arbitrary or selectable amount of data that may be represented by a sequence of one or more bits, and is transferred between devices across a network.
Generally, the phrase “computer-readable media” includes any media that can be accessed by a computing device. Computer-readable media may include computer storage media, communication media, or any combination thereof.
The phrase “communication media” typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism, and includes any information delivery media. By way of example, communication media includes wired media such as twisted pair, coaxial cable, fiber optics, waveguides, and other wired media, and wireless media such as acoustic, RF, infrared, and other wireless media.
The term “modulated data signal” means a signal that has one or more of its characteristics set or changed in such a manner as to encode information in the signal.
The term “based on” is not exclusive and allows for being based on additional factors not described, unless the context clearly dictates otherwise.
Clients 102-104 are computing devices capable of connecting with network 106. The set of such devices can include devices that connect using one or more wired communications mediums, a wireless communications medium, or a combination of wired and wireless communications mediums. Clients 102-104 include such devices as personal computers, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, cell phones, smart phones, pagers, PDAs, Pocket PCs, wearable computers, walkie talkies, radio frequency (RF) devices, infrared (IR) devices, CBs, integrated devices combining one or more of the preceding devices, and the like.
Servers 112-116 are computing devices that provide information and/or services to clients 102-104. Servers 112-116 can, for example, store web pages or components thereof, dynamically create web pages or components thereof, store data and/or files for access by other servers or clients, provide services to clients or other network devices, or any combination of these functions.
In one embodiment, clients 102-104 are sites where a human user operates the computing device to make requests for data or services from other computers on the network. Often, the requested data resides in computing devices such as servers 112-116. In this specification, the term “client” refers to a computer's general role as a requester of data or services, and the term “server” refers to a computer's role as a provider of data or services. In general, it is possible that a computer can act as a client, requesting data or services in one transaction and act as a server, providing data or services in another transaction, thus changing its role from client to server or vice versa. In one embodiment, clients 102-104 are computing devices that are not operated by a human user.
In one embodiment, traffic manager 110 manages traffic to an array of traffic managers, each of which in turn manages traffic to servers or other network devices. In this embodiment, each traffic manager of the array can be considered a target for packets arriving at the traffic manager 110 from clients 102-104.
Traffic manager 110 receives packets from network 106, through the router 108, and also receives packets from the servers 112-116. In some operations, traffic manager 110 acts like a layer 7 switch. That is, it may look at content associated with higher layers of the packet, e.g., a request for an HTML page, the request including a Uniform Resource Locator (URL) and information that identifies the user, such as a cookie. It can store information in memory so that the next time the requestor requests more information from the same web site, the request is sent to the same server. A traffic manager 110 can do this, in part, to ensure that the user is connected to the server that the user previously connected to. This helps prevent the loss of transaction data, such as items in an electronic shopping cart.
As illustrated in
The mass memory generally includes random access memory (“RAM”) 206, read-only memory (“ROM”) 214, and one or more permanent mass storage devices, such as hard disk drive 208. The mass memory stores operating system 216 for controlling the operation of network device 200. The operating system 216 may comprise an operating system such as UNIX®, LINUX®, or Windows®.
In one embodiment, the mass memory stores program code and data for implementing a load balancer 218, and program code and data for implementing a persistence engine 220. The mass memory can also store additional program code 224 and data for performing the functions of network device 200. The mass memory can further include one or more user programs 226 for controlling the network device 200. In particular, in accordance with one embodiment of the present invention, the user program 226 interacts with, provides data to, receives data from, and controls the load balancer 218 and the persistence engine 220, as described in further detail below.
In one embodiment, the network device 200 includes one or more Application-Specific Integrated Circuit (ASIC) chips 230 connected to the bus 204. The ASIC chip 230 includes logic that performs some of the functions of network device 200. For example, in one embodiment, the ASIC chip 230 performs a number of packet processing functions, to process incoming packets. In one embodiment, the ASIC chip 230 includes the persistence engine 220, or a portion thereof. In one embodiment, the network device 200 includes one or more field-programmable gate arrays (FPGAs) (not shown), instead of, or in addition to, the ASIC chip 230. A number of functions of the network device can be performed by the ASIC chip 230, by an FPGA, by the CPU 202 with the logic of program code stored in mass memory, or by any combination of the ASIC chip, the FPGA, and the CPU.
In one embodiment, the network device 200 includes an SSL proxy (not shown) that performs cryptographic operations. In one embodiment, the SSL proxy is a separate network device from the traffic manager. In one embodiment, a separate network device performs the functions of the persistence engine 220.
Computer storage media can include volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules or other data. Examples of computer storage media include RAM 206, ROM 214, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium that can store the information and that can be accessed by a computing device.
Network device 200 can also include an input/output interface (not shown) for communicating with external devices or users.
Network device 200 can be implemented as one or more “blades” where the term “blade” refers to one of multiple electronic circuit boards or cards that are installed in a hardware chassis with a backplane. An exemplary blade can include one or more processors, volatile and non-volatile memory, interfaces suitable for communicating information to and from the blade, and other components for enabling the operation of one or more applications. A blade can also include a specialized interface for the backplane and other interfaces, such as a USB port, FIREWIRE port, serial port, RF interface, IR interface, Ethernet interface, IDE controller, and the like. An application running on a blade can employ any of these interfaces to communicate information to other applications running on other blades and/or devices coupled to the blade server. Network device 200 can also be implemented as a combination of blades and additional components in chassis. In one embodiment, servers 112-116, of
The traffic manager 110 shown in
In one example of the invention, the traffic manager is a load-balancing traffic manager. In this example, the traffic manager includes load-balancing and control logic that can be implemented in software, hardware, or a combination of software and hardware.
The BIG-IP® Traffic Manager, by F5 Networks of Seattle, Wash., is a traffic manager that can be used to perform embodiments of the invention. Various functions of the BIG-IP Traffic Manager are disclosed in the BIG-IP Reference Guide, version 4.5.
In one embodiment, the traffic manager 110 intelligently distributes web site connections across the server array. The traffic manager 110 can manage connections to one or multiple Internet or intranet sites, and it can support a wide variety of Internet protocols and services such as TCP/IP (transmission control protocol/Internet protocol) and HTTP. Additionally, the traffic manager 110 can support any one or more of a suite of communications protocols that are used to connect nodes on the Internet, including HTTP, file transfer (FTP), secure sockets layer (SSL), streaming media, DNS, UDP/IP, and email (SMTP).
Referring to
If a client 102-104 connects to one server 112-116 when requesting information, it may be desirable for the client to reconnect to that same server in a second request. This is desirable, for example, during an ecommerce transaction, when a server has stored data associated with the customer, and the data is not dynamically shared with the other servers. This is often the case when a customer uses a “shopping cart” at a web site server. During a second request, if the client is connected by the traffic manager to a second server, the second server might not have the information about the client and the shopping cart. In this situation, it is desirable that the traffic manager bypasses the load-balancing algorithm during the second request and directs the client to the prior server. This procedure is referred to as persistence.
In order to maintain persistence, a traffic manager uses a persistence key to associate incoming packets with corresponding target servers. In one embodiment, a traffic manager generates a persistence key based on data associated with incoming packets. A traffic manager may maintain a persistence table, which stores mappings between persistence keys and corresponding servers. After generating a persistence key, the traffic manager may look up the persistence key in the persistence table, in order to determine the corresponding target server.
In one embodiment of the invention, the traffic manager can be configured to selectively perform a persistence operation, or to not perform persistence operations. When configured to not perform persistence operations, the load balancer selects targets based on a load balancing algorithm.
At a block 306, the process 302 makes a determination of whether to perform a load balancing operation with the incoming packets, or to perform a persistence operation. This determination is made based on the persistence configuration as described above. It is to be noted that when a determination to perform a persistence operation is made at the block 306, a subsequent determination, such as at block 310, might result in a load balancing operation being performed instead of persistence. If the process determines that it is to perform a persistence operation, the flow proceeds to a block 308. At the block 308, the process 302 generates a persistence key, using data within the received packets. The generation of a persistence key is discussed in more detail below. In some situations, the generation of a persistence key may fail to produce a key. At a block 310, the process determines whether a persistence key was successfully generated at the block 308. If a persistence key was generated, the flow proceeds to the block 312, where a lookup of the persistence key is performed on a persistence table. It is to be noted that the persistence table can include any of a number of data structures that map persistence keys to corresponding values that indicate a server. This can include a hash table or other type of database. The lookup can include determining whether a valid key/value exists for the generated persistence key.
In one embodiment, the process of looking up a key in the persistence table includes referring to a timestamp associated with the key, and determining whether a key/value mapping remains valid. If the mapping has timed out, it is treated in the same manner as not having a mapping at all, with respect to the process 302. In one embodiment, mappings that time out are removed from the persistence table.
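The timestamp-based lookup described above can be sketched as follows. This is a minimal in-memory illustration, not the disclosed implementation; the class name, dictionary representation, and timeout value are assumptions.

```python
import time

class PersistenceTable:
    """Illustrative sketch: maps persistence keys to (target, timestamp)
    entries, treating a timed-out mapping as if no mapping existed."""

    def __init__(self, timeout=300.0):  # timeout value is an assumption
        self.timeout = timeout
        self._entries = {}  # key -> (target, timestamp)

    def insert(self, key, target, now=None):
        now = time.time() if now is None else now
        self._entries[key] = (target, now)

    def lookup(self, key, now=None):
        """Return the mapped target, or None if absent or timed out."""
        now = time.time() if now is None else now
        entry = self._entries.get(key)
        if entry is None:
            return None
        target, stamp = entry
        if now - stamp > self.timeout:
            # A timed-out mapping is removed, per one embodiment above.
            del self._entries[key]
            return None
        return target
```

A lookup miss here corresponds to the "no valid mapping" branch of the process 302, which falls back to load balancing.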
At a decision block 314, a determination is made as to whether a valid mapping corresponding to the persistence key was found at the block 312. If a valid mapping was found, then flow proceeds to the block 316. At the block 316, the received packets are forwarded to a target corresponding to the mapping, based on the data in the mapping. In one embodiment, the data in the mapping is an identifier that directly identifies the target. In one embodiment, the data in the mapping is a value that can be used to determine the target. As used herein, the term “target” can refer to one of a number of types of targets. For example, target can refer to a target server, such as one of target servers 112-116 in
If, at the decision block 306, a determination is made to perform load balancing, flow proceeds to the block 318. At the block 318, a target server is selected for the received packets based on a load balancing algorithm.
If, at the decision block 310, a determination is made that a persistence key was not generated, flow proceeds to the block 318, and a load balancing operation is performed. At the block 316, the packet is forwarded to a target, based on the load balancing operation. In one embodiment, in response to receiving a packet, a target can generate a persistence key or a portion of a persistence key and transmit the key, or key portion, to the client. In a subsequent communication, the client then transmits the persistence key, or key portion, for use in maintaining persistence. A traffic manager can also generate a persistence key or a key portion and transmit it to a client. In one embodiment, a target selectively determines whether to transmit a persistence key, or key portion, to a client. The target can determine, for example, that persistence is not needed for some transactions, and that for other transactions, persistence is desired.
If, at the decision block 314, a determination is made that a valid mapping for the persistence key was not found, flow proceeds to a block 320, and a target is selected by using a load balancing algorithm. Additionally, if a valid mapping for the persistence key was not found at the decision block 314, at a block 322, a mapping of the key and the selected target is added to the persistence table. This allows the mapping to be found the next time the key is used. Flow then proceeds to the block 316, where the packet or packets are forwarded to the target.
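The miss path described above (load balance at block 320, record the mapping at block 322) can be sketched as follows. The round-robin selector and plain dictionary are illustrative assumptions only.

```python
import itertools

def make_round_robin(targets):
    """A minimal load-balancing algorithm (assumed): cycle through targets."""
    cycle = itertools.cycle(targets)
    return lambda: next(cycle)

def resolve_target(key, table, select_target):
    """Sketch of blocks 312-322: look up the persistence key; on a miss,
    select a target by load balancing and record the new mapping so the
    next use of the key finds it."""
    target = table.get(key)
    if target is None:
        target = select_target()  # corresponds to block 320
        table[key] = target       # corresponds to block 322
    return target
```

Subsequent packets carrying the same key are then forwarded to the recorded target rather than being re-balanced.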
It is to be noted that, in some embodiments, the actions of the process 302 may occur in different sequences, additional actions may be performed, or not all of the illustrated actions may be performed. For example, in one embodiment, after a target server to persist to is determined, a traffic manager receives additional packets from the client device prior to forwarding any of the client's packets to the target server. In one embodiment, one or more packets that are used for a persistence determination might not be forwarded to a target server, and the target server determination is used for additional packets from the same client. In one embodiment, data in the received packets is collected and modified, and forwarded to a target server in one or more packets. In one embodiment, the actions of blocks 308, 310, 312, and 314 are performed by a persistence engine, such as persistence engine 220 of
Referring again to
The user program can include conditional instructions and flow control instructions that allow the user program to perform actions based on specified conditions.
The following table illustrates a set of user instructions that are available to a user program in one embodiment of the invention. These instructions can be used to process layer 7 data in a buffered TCP stream, even if it is not part of an HTTP connection.
In the exemplary user program code of
In one embodiment, a udp_content variable allows a user program to create a basic expression that load balances or persists on traffic based on arbitrary data within a UDP connection.
Some of the variables represent data that is available within an HTTP request string. These variables represent layer 7 data, and may require the buffering of multiple packets in order to identify the data. The following table illustrates a set of symbolic variables that are available to a user program and represent such HTTP data.
In the exemplary user program code of
In the exemplary user program code of
At a block 508, one or more of the user instructions are processed. The user instructions can direct the operation of the core modules in a number of ways. This is discussed in further detail below. At a decision block 510, a determination is made as to whether a user instruction to accumulate additional data has been processed. An instruction to accumulate additional data instructs the traffic manager to buffer the packet data, and to process at least one more packet from the client, if available.
If, at the decision block 510, it is determined that an accumulate instruction has not been processed, flow proceeds to the decision block 514. At the decision block 514, a determination is made as to whether additional data is needed, from additional packets. This decision is based on operations and user instructions other than an explicit accumulate instruction. For example, a user instruction might request a specific data item. If the data item has not yet been found, the persistence engine can automatically require additional data to be accumulated. Thus, the user instruction serves as an implicit instruction to accumulate additional data if the data is not yet available.
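The implicit-accumulation behavior described above can be sketched as follows. A search for an HTTP-style header is an assumed example of a requested data item; packet payloads are buffered one at a time until the item is found or the stream ends.

```python
def find_header(packets, name):
    """Illustrative sketch: accumulate packet payloads until the named
    header can be located, mimicking an implicit accumulate instruction.
    Returns the header value, or None if the stream ends first."""
    buffered = b""
    needle = name.encode() + b": "
    for payload in packets:  # each iteration buffers one more packet
        buffered += payload
        start = buffered.find(needle)
        if start == -1:
            continue  # item not yet available: accumulate more data
        start += len(needle)
        end = buffered.find(b"\r\n", start)
        if end == -1:
            continue  # header value is split across packets: keep buffering
        return buffered[start:end].decode()
    return None
```

Note that the value is found even when it spans a packet boundary, because the test is applied to the accumulated buffer rather than to any single packet.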
If the decision block 514 determines that additional data is needed, flow proceeds to the block 504, where a new packet is received. If no more data is needed at the decision block 514, the process 502 proceeds to the decision block 516, where a determination is made as to whether a persistence key, in accordance with the user program, has been found. If a persistence key has been found, at a block 518, the persistence key is provided. If a persistence key has not been found, then at a block 520 an indication that a key is not found is provided. Note that the decision block 516 corresponds to the decision block 310 in
Buffering the received data, as represented by the block 505, might include extracting some or all of the data from the packets, processing it as required, and copying the data to a buffer, where it is saved at least until the processing is complete. The amount of data in the buffer increases as each new packet is received. Buffering can include logical buffering, where the data is not physically copied to a single buffer, and pointers or other values are maintained to allow inspection and processing of the collected packet data. In this manner, the buffer is a logical buffer, rather than a physical buffer. As used herein, the terms buffer and buffering include both logical and physical buffers.
Processing of the data can include one or more of several types of processing. The processing might include decrypting data that has previously been encrypted. For example, in a secure communication, such as SSL communication, the client might encrypt packets prior to transmitting them, and the traffic manager decrypts the packets after receiving them. The processing might include decompressing data that has previously been compressed. It might include discarding data that is not needed, or selectively extracting portions that are considered to be useful.
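One such processing step, decompressing previously compressed data, might be sketched as follows. The use of deflate compression via `zlib` is an assumption for illustration; the disclosed system may use other compression or encryption schemes.

```python
import zlib

def process_payload(payload, compressed=False):
    """Illustrative sketch of one processing step: decompress payload
    data that the sender compressed (deflate assumed) before the
    persistence engine inspects it."""
    if compressed:
        return zlib.decompress(payload)
    return payload
```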
Secure communication protocols typically include a handshake protocol that includes one or more messages passed between two devices. In the SSL handshake, for example, messages are passed that include data such as a device identifier, a digital certificate, and encryption information. In one embodiment of the invention, processing the data includes extracting one or more data items that are received as part of a secure communication protocol handshake, and providing the data items to a user program to create a persistence key. A user program can use such a data item individually, or combine it with other data received either within or outside of a secure communication protocol handshake.
Processing the data might also include decoding one or more portions of “chunked” data. Chunked encoding refers to formatting fragments of a message in discrete pieces, each piece accompanied by an indication of the size of the piece. Chunked encoding allows dynamically produced content to be transferred along with the information necessary for the recipient to verify that it has received the full message. The HTTP 1.1 protocol, cited above, describes one way to implement chunked encoding. In the HTTP 1.1 protocol implementation, a hexadecimal size field is prepended to the chunked data. Alternative implementations of chunked encoding are also possible, and corresponding techniques for decoding can be used with the invention.
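The HTTP 1.1 style of chunked decoding can be sketched as follows. This is a simplified decoder: chunk extensions and trailer headers, which RFC 2616 also permits, are omitted.

```python
def decode_chunked(data):
    """Decode HTTP/1.1-style chunked encoding: each chunk is a
    hexadecimal size line followed by that many bytes of data, and a
    zero-size chunk marks the end of the message."""
    body = b""
    pos = 0
    while True:
        eol = data.index(b"\r\n", pos)
        size = int(data[pos:eol], 16)  # hexadecimal size field
        if size == 0:
            return body                # last chunk: message complete
        start = eol + 2
        body += data[start:start + size]
        pos = start + size + 2         # skip chunk data and trailing CRLF
```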
The process of decoding chunked data includes reformatting multiple fragments of a chunked message in order to reform the original message. In one embodiment of the present invention, the amount of data that is decoded can be controlled by the user program instructions.
In one embodiment, processing the data also includes recognizing “lines” in the data. A line can be recognized by a delimiter comprising a carriage return, a linefeed, or both. A line can also be identified by the use of a table that includes an offset for the beginning of each line. This allows a user program to extract data based on its position within a line. For example, a user program can control a test as to whether a specified string occurs at the beginning of a line or at the end of a line. By identifying the beginning or ending of lines within the content, one embodiment of the present invention can perform such tests relative to a line. In one embodiment, a user program can extract data based on a position of a line containing the data within a message. For example, a user program might specify that a data item is to be extracted before or after a specified number of lines from the message beginning, before or after a specified number of lines from the message end, or before or after a specified number of lines following an identified position. In another example, a user program that controls buffering can specify a number of additional lines of content to accumulate, or a minimum or maximum number of lines to accumulate. In yet another example, a user program can identify a line within the data, and specify that a string is to be found in the line that immediately follows the identified line, or within a specified number of following lines. As used herein, the term message can refer to a single packet or to multiple packets of a communication.
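The line-offset table described above can be sketched as follows. The function names are illustrative; CRLF and bare LF are both treated as line delimiters.

```python
def line_offsets(data):
    """Build a table of offsets for the beginning of each line."""
    offsets = [0]
    i = 0
    while i < len(data):
        if data[i:i + 2] == b"\r\n":
            offsets.append(i + 2)
            i += 2
        elif data[i:i + 1] == b"\n":
            offsets.append(i + 1)
            i += 1
        else:
            i += 1
    return offsets

def line_starts_with(data, offsets, line_no, prefix):
    """Test whether a specified string occurs at the beginning of a
    given line, using the offset table to find the line start."""
    start = offsets[line_no]
    return data[start:start + len(prefix)] == prefix
```

A test relative to a line, such as whether a string begins a particular line, then becomes a simple offset comparison rather than a scan of the whole buffer.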
In one embodiment, processing the data includes recognizing components within a markup language, such as the Extensible Markup Language (XML), Standard Generalized Markup Language (SGML), and similar languages. For example, the exemplary user program code of
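Recognition of a markup-language component can be sketched as follows using a standard XML parser. The `sessionId` element name is a hypothetical example, not an element disclosed by the specification.

```python
import xml.etree.ElementTree as ET

def extract_xml_value(message, tag):
    """Illustrative sketch: locate a named component within an XML
    message and return its text, e.g. for use as a persistence key."""
    root = ET.fromstring(message)
    node = root.find(f".//{tag}")
    return None if node is None else node.text
```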
The process 502 of generating a persistence key can include extracting multiple data items and combining them to generate a persistence key. Some data items are extracted from single packets. For example, a data item of a source IP address or a data item of server port is found in the TCP header of a single packet. Some data items are found in the buffered data, either because the data spans more than one packet, or because the data is not confined by a protocol to a specific packet. Items found in buffered data are items that are found at layers 6-7 of the OSI model. One example of this is a string found in an HTTP header. The particular string might be contained within a single packet, or might span two or more packets. In either case, the string is considered to be buffered data. In another example, an array of binary data within a TCP or UDP stream might be contained within a single packet, or might span two or more packets. Logically, however, it is considered to be part of the buffered data, because it is not structurally confined to a specific packet. The process of decoding chunked data can be performed as part of the process of buffering data.
This method of extracting and processing data allows an embodiment of the present invention to generate persistence keys from a combination of data items that include data items from both buffered data and single-packet data. As used herein, the term single-packet data does not require that the single-packet data is not buffered. The term describes data that is received within a single packet, and is known by the receiver to be within a single packet. For example, a user program can specify that a source IP address and an HTTP header are to be combined to form a persistence key. The source IP address (a single-packet data item) and the HTTP header (a buffered data item) are combined to form the persistence key. Combinations of two or more data items from buffered and single-packet data can be generated in this manner. Typically, single-packet data is data that is found at layers 2 and 3 of the OSI model.
The method of extracting and processing data also allows alternative data items from single-packet and buffered data to be used. For example, a user program can specify that an HTTP header value (a buffered item) be used as a persistence key if it is found, and that if it is not found, the source IP address is to be used. This allows a user program to process and manipulate both buffered and single-packet data, with the details of the buffering handled by the core modules.
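The combination and fallback behavior described above can be sketched as follows. The `X-Session` header name, the separator character, and the accessor interface are hypothetical illustrations.

```python
def make_persistence_key(source_ip, get_header):
    """Illustrative sketch: combine a buffered data item (an HTTP header
    value, layer 7) with a single-packet data item (the source IP
    address, layer 3) to form a persistence key, falling back to the
    source IP address alone when the header is absent."""
    header_value = get_header("X-Session")  # header name is an assumption
    if header_value is not None:
        return f"{source_ip}|{header_value}"  # combined key
    return source_ip                          # fallback key
```

The user program sees only the data items; the core modules handle the buffering needed to make the header value available.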
In one embodiment, some aspects of the invention are employed to provide persistence for two or more clients. In this embodiment, two or more clients, such as clients 102-104 (
In one embodiment, the traffic manager or a server generates an identifier and transmits it to each of the clients. Each of the clients then includes the identifier in its packets, and the traffic manager extracts this identifier as a persistence key. HTTP cookies, HTTP headers, or XML data can be employed to pass the identifier from each of the clients. As an alternative, one or more of the clients can generate the identifier and transmit this identifier to each of the other clients in the communication.
When extracting persistence keys from packets received from multiple clients, in one embodiment, the persistence keys are equivalent. The traffic manager might, for example, look up a mapping between the persistence key and the target in order to identify the target. Referring to the actions of
In one embodiment having two or more clients persisting to the same target, persistence keys extracted from each client may differ, and additional actions are taken to identify the target.
An alternative way to implement persistence in the above configuration is as follows. When a client inserts its identifier and an identifier of a second client in a message, the client first sorts the identifiers in a specified order, for example, numeric order, or alphabetical order. In this way, the extracted persistence keys for each pair of clients will be identical, regardless of whether the message arrived from the first or the second client. This process can also be expanded to configurations in which more than two clients desire to persist to a target.
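The identifier-sorting technique can be sketched as follows. The join delimiter and identifier format are assumptions; what matters is that sorting makes the key independent of which client sent the message.

```python
def shared_persistence_key(own_id, peer_ids):
    """Sort all client identifiers into a specified order (alphabetical
    here) so that every participating client derives the identical
    persistence key, regardless of which client sent the message."""
    return "|".join(sorted([own_id] + list(peer_ids)))
```

As stated above, this generalizes directly to more than two clients persisting to the same target.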
Yet another alternative is to have a user instruction to sort two extracted fields in a specified order when creating a persistence key. In this way, the traffic manager does the work of creating a persistence key that is identical regardless of which client it came from. In
At boxes 714 or 716, a packet or packets are received from either the first client or a second client, such as Client B 604 of
As discussed above, but not illustrated in
In another application employing some aspects of the invention, a client communicates through a traffic manager to a server using a tunnelling protocol. SOCKS is one such tunnelling protocol, in which TCP is tunnelled over TCP/IP. The client transmits packets that contain a TCP header, a SOCKS header, and data. As discussed above, the traffic manager can load balance a first packet from the client using a load balancing algorithm. When subsequent packets are received from the client, the load balancer extracts a persistence key from the packet. A persistence key can be extracted, for example, from one or more fields within the SOCKS header, or from any combination of fields in the SOCKS header, the TCP header, and the data. This allows the traffic manager to load balance and maintain persistence with packet traffic within a tunnelling protocol.
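Extraction of a persistence key from a SOCKS header can be sketched as follows, assuming the SOCKS4 request layout (a 1-byte version, a 1-byte command, a 2-byte destination port, and a 4-byte destination IPv4 address); the choice of fields to combine into the key is an assumption.

```python
import struct

def socks4_persistence_key(packet):
    """Illustrative sketch: unpack fields from a SOCKS4 request header
    and combine the destination address and port into a persistence
    key, so packets for the same tunnelled destination persist to the
    same target."""
    version, command, port = struct.unpack("!BBH", packet[:4])
    ip = ".".join(str(b) for b in packet[4:8])
    return f"{ip}:{port}"
```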
The above specification, examples and data provide a complete description of the manufacture and use of the composition of the invention. Since many embodiments of the invention can be made without departing from the spirit or scope of the invention, the invention resides in the claims hereinafter appended.
This application is a Continuation of U.S. patent application Ser. No. 13/300,398 entitled “Method And System For Managing Network Traffic,” filed Nov. 18, 2011, now U.S. Pat. No. 8,176,164, which is a Continuation of U.S. patent application Ser. No. 12/837,420, filed on Jul. 15, 2010, now U.S. Pat. No. 8,150,957, which in turn is a Divisional of U.S. patent application Ser. No. 10/385,790 entitled “Method And System For Managing Network Traffic,” filed on Mar. 10, 2003, now U.S. Pat. No. 7,774,484, the benefits of which are claimed under 35 U.S.C. §120. This application further claims the benefit of U.S. Provisional Patent Application Ser. No. 60/435,550, filed on Dec. 19, 2002, under 35 U.S.C. §119(e). Each of the foregoing applications is incorporated herein by reference.
Number | Name | Date | Kind |
---|---|---|---|
5553242 | Russell et al. | Sep 1996 | A |
5774660 | Brendel et al. | Jun 1998 | A |
5835724 | Smith | Nov 1998 | A |
5941988 | Bhagwat et al. | Aug 1999 | A |
6104716 | Crichton et al. | Aug 2000 | A |
6128657 | Okanoya et al. | Oct 2000 | A |
6175867 | Taghadoss | Jan 2001 | B1 |
6182122 | Berstis | Jan 2001 | B1 |
6226684 | Sung et al. | May 2001 | B1 |
6253226 | Chidambaran et al. | Jun 2001 | B1 |
6298380 | Coile et al. | Oct 2001 | B1 |
6304908 | Kalajan | Oct 2001 | B1 |
6360262 | Guenthner et al. | Mar 2002 | B1 |
6370584 | Bestavros et al. | Apr 2002 | B1 |
6381638 | Mahler et al. | Apr 2002 | B1 |
6411986 | Susai et al. | Jun 2002 | B1 |
6650640 | Muller et al. | Nov 2003 | B1 |
6654701 | Hatley | Nov 2003 | B2 |
6697363 | Carr | Feb 2004 | B1 |
6754662 | Li | Jun 2004 | B1 |
6766373 | Beadle et al. | Jul 2004 | B1 |
6829238 | Tokuyo et al. | Dec 2004 | B2 |
6928082 | Liu et al. | Aug 2005 | B2 |
6950434 | Viswanath et al. | Sep 2005 | B1 |
6954780 | Susai et al. | Oct 2005 | B2 |
6957272 | Tallegas et al. | Oct 2005 | B2 |
RE38902 | Srisuresh et al. | Nov 2005 | E |
6963982 | Brustoloni et al. | Nov 2005 | B1 |
6978334 | Hiratsuka | Dec 2005 | B2 |
7007092 | Peiffer | Feb 2006 | B2 |
7103045 | Lavigne et al. | Sep 2006 | B2 |
7136385 | Damon et al. | Nov 2006 | B2 |
7139792 | Mishra et al. | Nov 2006 | B1 |
7146417 | Coile et al. | Dec 2006 | B1 |
7161947 | Desai | Jan 2007 | B1 |
7215637 | Ferguson et al. | May 2007 | B1 |
7225237 | Tenereillo | May 2007 | B1 |
7231446 | Peiffer et al. | Jun 2007 | B2 |
7254639 | Siegel et al. | Aug 2007 | B1 |
7277924 | Wichmann et al. | Oct 2007 | B1 |
7321926 | Zhang et al. | Jan 2008 | B1 |
7366781 | Abjanic | Apr 2008 | B2 |
7376731 | Khan et al. | May 2008 | B2 |
7657618 | Rothstein et al. | Feb 2010 | B1 |
7720980 | Hankins et al. | May 2010 | B1 |
7774484 | Masters et al. | Aug 2010 | B1 |
8009566 | Zuk et al. | Aug 2011 | B2 |
20010023442 | Masters | Sep 2001 | A1 |
20020025036 | Sato | Feb 2002 | A1 |
20020055980 | Goddard | May 2002 | A1 |
20020055983 | Goddard | May 2002 | A1 |
20020059428 | Susai et al. | May 2002 | A1 |
20020078174 | Sim et al. | Jun 2002 | A1 |
20020105931 | Heinonen et al. | Aug 2002 | A1 |
20020112071 | Kim | Aug 2002 | A1 |
20020120743 | Shabtay et al. | Aug 2002 | A1 |
20020138627 | Frantzen et al. | Sep 2002 | A1 |
20020138739 | Scheetz et al. | Sep 2002 | A1 |
20030091025 | Celi et al. | May 2003 | A1 |
20030145077 | Khan et al. | Jul 2003 | A1 |
20030208600 | Cousins | Nov 2003 | A1 |
20040205597 | Abjanic | Oct 2004 | A1 |
20040225810 | Hiratsuka | Nov 2004 | A1 |
20050138243 | Tierney et al. | Jun 2005 | A1 |
20080025336 | Cho et al. | Jan 2008 | A1 |
20080253366 | Zuk et al. | Oct 2008 | A1 |
20100121953 | Friedman et al. | May 2010 | A1 |
Entry |
---|
“BIG-IP® Controller With Exclusive OneConnect™ Content Switching Feature Provides a Breakthrough System for Maximizing Server and Network Performance”, F5 Networks, Inc., Press Release, May 8, 2001; 3 pages. |
Fielding, R. et al., “Hypertext Transfer Protocol—HTTP/1.1,” Network Working Group, RFC 2068, Jan. 1997, 152 pages. |
Fielding, R. et al., “Hypertext Transfer Protocol—HTTP/1.1,” Network Working Group, RFC 2616, Jun. 1999, 165 pages. |
Hochmuth, Phil, “F5, CacheFlow Pump Up Content-Delivery Lines”, NetworkWorld, May 4, 2001, 3 pages, http://www.network.com/news/2001/0507cachingonline.html, accessed Jun. 1, 2005. |
Hewitt, John R. et al., “Securities Practice and Electronic Technology,” Corporate Securities Series (New York: Law Journal Seminars-Press) 1998, title page, bibliography page, pp. 4.29-4.30. |
Acharya et al., “Scalable Web Request Routing with MPLS”, IBM Research Report, IBM Research Division, Dec. 5, 2001. |
Reardon, Marguerite, “A Smarter Session Switch: Arrowpoint's CS Session Switches Boast The Brains Needed for E-Commerce,” Data Communications, Jan. 1999, title page, pp. 3, 5, 18. |
Snoeren, A.C., et al., “Fine-Grained Failover Using Connection Migration,” in Third Annual USENIX Symposium on Internet Technologies and Systems, 2001, http://walfredo.dsc.ufcg.edu.br/cursos/apdist20011/Snoeren01.pdf, accessed Jun. 18, 2010. |
Mogul, Jeffrey C., “The Case for Persistent-Connection HTTP,” Proceedings of the Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication, pp. 299-313, Aug. 28-Sep. 1, 1995, Cambridge, Massachusetts, United States. |
Yeom, H. Y. et al., “IP Multiplexing by Transparent Port-Address Translator,” Proceedings of the Tenth USENIX System Administration Conference, Chicago, IL, USA, 1996, pp. 1-16. |
Official Communication for U.S. Appl. No. 10/172,411 mailed Dec. 16, 2005. |
Official Communication for U.S. Appl. No. 10/172,411 mailed Jun. 19, 2006. |
Official Communication for U.S. Appl. No. 10/172,411 mailed Sep. 1, 2006. |
Official Communication for U.S. Appl. No. 10/172,411 mailed Nov. 13, 2006. |
Official Communication for U.S. Appl. No. 10/172,411 mailed Apr. 30, 2007. |
Official Communication for U.S. Appl. No. 10/172,411 mailed Oct. 18, 2007. |
Official Communication for U.S. Appl. No. 10/172,411 mailed Apr. 25, 2008. |
Official Communication for U.S. Appl. No. 10/172,411 mailed Oct. 1, 2008. |
Official Communication for U.S. Appl. No. 10/385,790 mailed Apr. 8, 2008. |
Official Communication for U.S. Appl. No. 10/385,790 mailed Sep. 2, 2008. |
Official Communication for U.S. Appl. No. 10/385,790 mailed Mar. 4, 2009. |
Official Communication for U.S. Appl. No. 10/385,790 mailed Sep. 10, 2009. |
Official Communication for U.S. Appl. No. 10/385,790 mailed Apr. 2, 2010. |
Official Communication for U.S. Appl. No. 10/409,951 mailed Jun. 16, 2006. |
Official Communication for U.S. Appl. No. 10/409,951 mailed Nov. 30, 2006. |
Official Communication for U.S. Appl. No. 10/409,951 mailed Mar. 8, 2007. |
Official Communication for U.S. Appl. No. 10/409,951 mailed Aug. 24, 2007. |
Official Communication for U.S. Appl. No. 10/409,951 mailed Jan. 4, 2008. |
Official Communication for U.S. Appl. No. 10/409,951 mailed Jul. 17, 2008. |
Official Communication for U.S. Appl. No. 10/409,951 mailed Dec. 24, 2008. |
Official Communication for U.S. Appl. No. 10/409,951 mailed Jun. 10, 2009. |
Official Communication for U.S. Appl. No. 10/409,951 mailed Dec. 24, 2009. |
Official Communication for U.S. Appl. No. 10/409,951 mailed Jun. 22, 2010. |
Official Communication for U.S. Appl. No. 10/409,951 mailed Jan. 19, 2011. |
Official Communication for U.S. Appl. No. 12/327,742 mailed Aug. 4, 2010. |
Official Communication for U.S. Appl. No. 12/327,742 mailed Jan. 5, 2011. |
Official Communication for U.S. Appl. No. 12/837,420 mailed Mar. 2, 2011. |
Official Communication for U.S. Appl. No. 12/837,420 mailed Aug. 18, 2011. |
Official Communication for U.S. Appl. No. 13/300,398 mailed Jan. 12, 2012. |
Official Communication for U.S. Appl. No. 12/837,420 mailed Nov. 25, 2011. |
Official Communication for U.S. Appl. No. 12/327,742 mailed Jun. 5, 2012. |
Official Communication for U.S. Appl. No. 12/327,742 mailed Nov. 15, 2012. |
Official Communication for U.S. Appl. No. 12/327,742 mailed May 3, 2013. |
Number | Date | Country | |
---|---|---|---|
60435550 | Dec 2002 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 10385790 | Mar 2003 | US |
Child | 12837420 | US |
Number | Date | Country | |
---|---|---|---|
Parent | 13300398 | Nov 2011 | US |
Child | 13445843 | US | |
Parent | 12837420 | Jul 2010 | US |
Child | 13300398 | US |