Protocol-configurable transaction processing

Information

  • Patent Grant
  • Patent Number
    9,225,479
  • Date Filed
    Thursday, September 13, 2012
  • Date Issued
    Tuesday, December 29, 2015
Abstract
A traffic management device or other intermediate network device is configured to enable the device to support connection splitting and/or connection aggregation or to otherwise process network transactions for an arbitrary transaction-oriented protocol. The configuration may be accomplished by providing one or more traffic management rules defined by way of a scripting language and provided to an interpreter. The traffic management rule may follow a basic approach common to many protocols and is adapted to the particular protocol being supported. The rule may configure the network device to inspect incoming data, extract length and record type specifiers, buffer an appropriate amount of data to determine transactions or transaction boundaries, and perform other operations. Transaction processing may be enabled for various kinds of protocols, including application-level, proprietary, quasi-proprietary, and special-purpose protocols, protocols for which limited information is available, and protocols not natively supported by the network device.
Description
FIELD OF THE INVENTION

The present invention relates generally to network communications, and more particularly, but not exclusively, to employing a traffic management device to process network transactions.


BACKGROUND OF THE INVENTION

Network traffic management mechanisms are typically deployed to mediate data communications between remote client devices and one or more server devices. Depending on the features of the protocol governing the communication, various optimizations may be achieved by a traffic management device or another intermediate network device. For example, some protocols support persistent client connections, in which a client can make multiple requests, and receive multiple responses, on the same network connection. Employing multiple requests on the same connection may be economical because it reduces the setup and teardown time associated with the underlying transport protocol. The traffic management device may then allocate the multiple requests to different backend servers for various purposes, such as load balancing. This process is known as connection splitting or connection multiplexing. In some protocols, the responses from the multiple backend servers to the external client connection must be sent back in order by the traffic management device.


A traffic management device may also be employed for connection aggregation, in which requests from multiple clients are allocated to the same backend server connection. Connection aggregation achieves economies on the backend connection and generally enables a set of servers to scalably handle a larger number of client requests.


Many protocols may be susceptible to some form of connection splitting and/or connection aggregation by an intermediate network device managing transactions that conform to the requirements of those protocols. This may enable clients and/or backend servers to be used more efficiently or scalably. However, traffic management devices have generally not been designed to be adaptable by users or administrators of the devices to support such services for arbitrary protocols, including foreign protocols that the device is not pre-configured to natively support, and including proprietary protocols or protocols for which limited information is available to the user.





BRIEF DESCRIPTION OF THE DRAWINGS

Non-limiting and non-exhaustive embodiments of the present invention are described with reference to the following drawings. In the drawings, like reference numerals refer to like parts throughout the various figures unless otherwise specified.


For a better understanding of the present invention, reference will be made to the following detailed description of the invention, which is to be read in association with the accompanying drawings, wherein:



FIG. 1 shows a functional block diagram illustrating one embodiment of an environment for practicing the invention;



FIG. 2 shows one embodiment of a server device that may be included in a system implementing the invention;



FIG. 3 illustrates a logical flow diagram generally showing a high-level view of one embodiment of a process for connection splitting of network transactions;



FIG. 4 illustrates a logical flow diagram generally showing a high-level view of one embodiment of a process for connection aggregation of network transactions;



FIG. 5 is a logical flow diagram generally showing a high-level view of one embodiment of a process for configuring network traffic management to support connection splitting and/or connection aggregation for a particular protocol;



FIG. 6 illustrates a logical flow diagram generally showing one embodiment of a process for providing connection splitting for an arbitrary protocol, with respect to client requests;



FIG. 7 illustrates a logical flow diagram generally showing one embodiment of a process for providing connection splitting for an arbitrary protocol, with respect to server responses; and



FIG. 8 illustrates a logical flow diagram generally showing one embodiment of a process for providing connection aggregation for an arbitrary protocol, in accordance with the invention.





DETAILED DESCRIPTION OF THE INVENTION

The present invention will now be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific exemplary embodiments by which the invention may be practiced. The invention may, however, be embodied in many different forms, and this specification should not be construed to limit the invention to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete and will convey fully the scope of the invention to those skilled in the art. The present invention may be embodied as methods or as devices, among other embodiments. Accordingly, the invention may take the form of a hardware embodiment, a software embodiment, or an embodiment combining software and hardware aspects. The following detailed description is, therefore, not to be taken in a limiting sense.


Throughout the specification and claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise. The phrase “in one embodiment” as used herein does not necessarily refer to the same embodiment, though it may. As used herein, the term “or” is used in an inclusive sense, and is equivalent to the term “and/or”, unless the context clearly dictates otherwise. The term “based on” is not exclusive and allows for being based on additional factors not described, unless the context clearly dictates otherwise. In addition, throughout the specification, the meanings of “a,” “an,” and “the” include plural references. The meaning of “in” includes “in” and “on.”


The term “network connection” or simply “connection” refers to various links, link types, communication interfaces, protocols, or the like, that enable a computing device to communicate with another computing device over a network. One such network connection may be a Transmission Control Protocol (TCP) connection. TCP connections are virtual connections between two network nodes and are typically established through a TCP handshake protocol. The TCP protocol is described in further detail in Request for Comments (RFC) 793, which is available at http://www.ietf.org/rfc/rfc0793.txt?number=793.


Briefly stated, the invention is directed towards enabling a user of an intermediate network device, such as a traffic management device, to configure the device to determine transaction boundaries for network transaction processing, such as connection splitting and/or connection aggregation for an arbitrary transaction-oriented protocol. The configuration may include specifying, in the form of rules defined by way of a scripting language or command language, operations to be performed by the device. The specified rules are evaluated at runtime upon the occurrence of particular triggering network events. In general, the rules, which are adapted to the requirements of a particular protocol, include inspecting incoming data, extracting length and record type specifiers, and buffering an appropriate amount of data to determine transaction boundaries so that transactions may be split and/or aggregated. The configuration provided by the user follows a general approach that reflects elements that are common to many transaction-oriented network protocols (for example, protocol headers containing length specifiers and record type specifiers).


The invention may be employed for such purposes as enabling connection splitting and/or connection aggregation for various protocols, including application-level protocols and including proprietary or quasi-proprietary protocols, protocols for which limited information is available, and foreign protocols. Throughout this specification and the accompanying claims, a protocol is “foreign” with respect to a device if it is not natively supported by the device prior to a configuration of the device by a user. For example, in an embodiment of the invention, rules may be provided for enabling connection splitting for such special-purpose protocols as Internet Inter-ORB Protocol (IIOP), Financial IntereXchange (FIX), and the National Council for Prescription Drug Programs (NCPDP) protocol. The invention thus enables a general-purpose traffic management device to provide connection splitting, connection aggregation, and other services for a particular protocol through configuration of the device by a user, without requiring native support for that protocol to be built into the device beforehand (for example, in the form of a precompiled protocol module). In addition to enabling clients and/or servers to be used more efficiently or in a more scalable manner, embodiments of the invention provide a means for rapidly prototyping connection splitting and/or connection aggregation for a protocol in preparation for a high-performance and maintainable native implementation.


Illustrative Operating Environment



FIG. 1 illustrates one embodiment of an environment in which the invention may operate. However, not all of the depicted components may be required to practice the invention, and variations in the arrangement and type of the components may be made without departing from the spirit or scope of the invention.


As shown, environment 100 includes multiple client devices 102-104, network 105, traffic management device (TMD) 106, and multiple backend servers 108-110. Each of client devices 102-104 is in communication with TMD 106 through network 105. TMD 106 is in further communication with backend servers 108-110. TMD 106 may be in communication with backend servers 108-110 through a network infrastructure, not shown, that is similar to network 105.


Generally, client devices 102-104 may include any computing device capable of connecting to another computing device to send and receive information. As such, client devices 102-104 may range widely in capabilities and features. Client devices 102-104 may include any device that is capable of connecting using a wired or wireless communication medium. The set of such devices may include devices that typically connect by way of a wired communication medium, such as personal computers, workstations, multiprocessor systems, microprocessor-based or programmable consumer electronics, and the like. The set of such devices may also include devices that typically connect using a wireless communication medium, such as cellular or other mobile telephones, personal digital assistants (PDAs), radio frequency (RF) devices, infrared (IR) devices, integrated devices combining one or more of the preceding devices, or another kind of mobile and/or wireless device. Each of client devices 102-104 may be configured to execute a client application or the like to perform various actions, including communicating requests to another device by way of a network interface and in accordance with one or more network protocols.


Network 105 couples each of client devices 102-104 with other network devices, such as TMD 106 or any other network-enabled electronic device. In essence, network 105 includes any communication means by which information may travel between any of client devices 102-104 and TMD 106. In one embodiment, network 105 includes or is a part of the set of interconnected networks that comprise the Internet. Network 105 may include local area networks (LANs), wide area networks (WANs), direct network connections, such as through a Universal Serial Bus (USB) port, or any combination thereof. In an interconnected set of LANs, including those based on differing architectures and protocols, a router may serve as a link between LANs, enabling messages to be sent from one LAN to another.


Communication links within LANs typically include twisted wire pair or coaxial cable, while communication links between networks may employ analog telephone lines, full or fractional dedicated digital lines including T1, T2, T3, and T4, Integrated Services Digital Networks (ISDNs), Digital Subscriber Lines (DSLs), wireless links including satellite links, or other communication links known to those skilled in the art. Furthermore, remote devices may be remotely connected to either LANs or WANs by way of a modem and temporary telephone link.


Network 105 may further employ one or more wireless access technologies including, but not limited to, second, third, or fourth generation (2G, 3G, or 4G) radio access for cellular systems, wireless LAN, wireless router (WR) mesh, and the like. Access technologies such as 2G, 3G, or 4G may enable wide area coverage for network devices, such as client devices 102-104, and the like, with various degrees of mobility. For example, network 105 may enable a radio connection through a radio network access such as Global System for Mobile communications (GSM), General Packet Radio Services (GPRS), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (WCDMA), and the like. Network 105 may include an infrastructure-oriented wireless network, an ad-hoc wireless network, or another form of wireless network.


TMD 106 may include any of various kinds of devices that manage network traffic. Such devices may include, for example, routers, proxies, firewalls, load balancers, cache devices, devices that perform network address translation, any combination of the preceding devices, and the like. TMD 106 may, for example, control the flow of data packets delivered to and forwarded from an array of servers, such as backend servers 108-110. TMD 106 may direct a request for a resource to a particular server based on network traffic, network topology, capacity of a server, content requested, and various other traffic distribution mechanisms. TMD 106 may receive data packets from and transmit data packets to a device within the Internet, an intranet, or a LAN accessible through another network. TMD 106 may recognize packets that are part of the same communication, transaction, flow, and/or stream and may perform appropriate processing on such packets, such as directing them to the same server so that state information is maintained. TMD 106 may support a wide variety of network applications.


TMD 106 may receive a request from one of client devices 102-104. TMD 106 may select a server from among backend servers 108-110 to which TMD 106 forwards the request. TMD 106 may employ any of a variety of criteria and mechanisms to select the server, including those mentioned above, load balancing mechanisms, and the like. TMD 106 is further configured to receive a response to the request and to forward the response to an appropriate one of client devices 102-104. Moreover, TMD 106 may receive multiple requests in a pipeline from one of client devices 102-104, in which case TMD 106 may forward the requests to one or more selected servers.


TMD 106 may be implemented using one or more general-purpose or special-purpose computing devices of various kinds. Such devices may be implemented solely in hardware or as a combination of hardware and software. For example, such devices may include application-specific integrated circuits (ASICs) coupled to one or more microprocessors. The ASICs may be employed to provide a high-speed switch fabric while the microprocessors may perform higher-layer processing of packets. One embodiment of a network device that may be employed as TMD 106 is network device 200 of FIG. 2, configured with appropriate software.


Backend servers 108-110 may include any computing device capable of data communication with client devices 102-104. Devices that may operate as backend servers 108-110 include personal computers, desktop computers, multiprocessor systems, microprocessor-based or programmable consumer electronics, server machines, and the like. Each of backend servers 108-110 may be configured to perform a different operation or function. Data communication may include communication of data packets, which may be sent to establish a connection, to acknowledge a receipt of data, to transport information such as a request or response, and the like. Packets received by backend servers 108-110 may be formatted in accordance with Transmission Control Protocol/Internet Protocol (TCP/IP), User Datagram Protocol (UDP), or another transport protocol or the like. The packets may be communicated among backend servers 108-110, TMD 106, and client devices 102-104 in accordance with any of various protocols such as protocols within the TCP/IP protocol suite.


In this specification, unless context indicates otherwise, the term “client” refers broadly to a requester of data or services in a particular transaction, and the term “server” refers broadly to a provider of data or services in a particular transaction. In general, a computing device may be configured to function as a client device, as a server device, or as both a client device and a server device. Accordingly, the present invention is applicable to network communication that employs client-server protocols as well as protocols that do not conform to a client-server model.


Illustrative Traffic Management Device



FIG. 2 shows one embodiment of a network device, in accordance with the present invention. Network device 200 may include many more or fewer components than those shown. The components shown, however, are sufficient to disclose an illustrative embodiment for practicing the invention. Network device 200 may represent, for example, TMD 106 of FIG. 1.


Network device 200 includes central processing unit 202, main memory 206, input/output interfaces 208, hard disk 210, and network interface 212, some or all of which communicate by way of bus 204. Hard disk 210 is employed for nonvolatile secondary storage and may also be used along with main memory 206 to implement virtual memory, which may be regarded as part of main memory 206. Main memory 206 typically includes both random-access memory (RAM) and read-only memory (ROM). Input/output interfaces 208 enable communication by central processing unit 202 with input and output devices, such as a display, printer, keyboard, mouse, and storage devices such as an optical disk. Network device 200 communicates over a network, such as network 105 in FIG. 1, by way of network interface 212, which may be configured for use with various network protocols, such as the TCP/IP protocol suite. Network interface 212 may include or be connected to a transceiver, a network interface card (NIC), and the like.


Main memory 206 is one example of a computer storage medium, which in turn is one kind of processor-readable medium. Computer storage media and other processor-readable media may include volatile, nonvolatile, removable, and non-removable media implemented in any technology or by way of any method for storage of information, such as machine-readable instructions, data structures, program modules, or other data. Examples of processor-readable media include RAM, ROM, EEPROM, flash memory or other memory technology, optical storage media, magnetic cassettes, magnetic tape, magnetic disk storage devices or media, or any other medium accessible directly or indirectly by a processor and which may be used to store information volatilely or nonvolatilely. Processor-readable media may further include network communication media that embody or encode data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.


Main memory 206 stores processor-executable program code and data. In particular, main memory 206 stores operating system 214, which is executed by central processing unit 202 to control the operation of network device 200. A general-purpose or special-purpose operating system may be employed. Additionally, one or more applications 216 of various kinds may be loaded into main memory 206 by way of operating system 214 and executed by central processing unit 202.


Among the applications 216 that may be loaded and run are traffic manager 218 and traffic management rules interpreter 220. Traffic manager 218 is configured to receive a request from a client device and to forward the request to a server selected based on a variety of criteria. For example, traffic manager 218 may select the server based on any of a variety of load balancing mechanisms, including round trip time, least connections, packet completion rate, quality of service, topology, global availability, a hop metric, a hash of an address in a received packet, a static ratio, a dynamic ratio, a source IP address, a destination IP address, a port number, deep-packet inspection including inspection of application-level data, historical data, session persistence, and a round robin mechanism. In another embodiment, traffic manager 218 may forward the request based on a type of request. For example, a database request may be forwarded to a predetermined database server, while an email request may be forwarded to a predetermined mail server. Traffic manager 218 is likewise configured to receive responses from servers and to forward the responses to client devices.


Traffic management rules interpreter 220 enables a user of network device 200 to customize the operation of the device by writing traffic management rules, which may be defined by way of commands, scripts, or the like, to configure traffic manager 218 or other aspects or elements of traffic management actions performed by network device 200. For example, a user who operates network device 200 may write a traffic management rule to inspect the header or payload of a packet received from a client device and to direct the packet to a particular backend server based on the results of the inspection. Traffic management rules interpreter 220 interprets a script defining the rule and causes traffic manager 218 to act on network traffic in accordance with the rule.
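For example, a rule of the kind described above might be sketched in the iRules-style scripting syntax used by the example rule presented later in this specification. The sketch below is illustrative only; the event name and operators are standard iRules constructs, while the header name and pool names are hypothetical.

when HTTP_REQUEST {
    # inspect a header of the incoming request and direct the request to a
    # backend pool based on the result of the inspection (names are illustrative)
    if { [HTTP::header "X-Client-Type"] equals "mobile" } {
        pool mobile_pool
    } else {
        pool web_pool
    }
}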


Main memory 206 may be employed by operating system 214, applications 216, traffic manager 218, traffic management rules interpreter 220, and other programs to store data, including storage of data in buffers 222.


In one embodiment, network device 200 includes at least one ASIC chip (not shown) coupled to bus 204. The ASIC chip may include logic that performs some of the operations of network device 200. For example, the ASIC chip may perform a number of packet processing functions for incoming and/or outgoing packets, and the ASIC chip may perform at least a portion of the logic to enable the operation of traffic manager 218.


Generalized Operation


The operation of certain aspects of the invention will now be described with respect to FIGS. 3-8. Aspects of the illustrated processes may be performed at an intermediate network device such as network device 200, and may be performed by traffic manager 218 as configured by traffic management rules interpreter 220, which performs operations specified in a script provided by a user to adapt connection splitting and/or connection aggregation to a particular protocol. FIG. 3 illustrates a logical flow diagram generally showing a high-level view of one embodiment of a process for connection splitting of network transactions. FIG. 4 illustrates a logical flow diagram generally showing a high-level view of one embodiment of a process for connection aggregation of network transactions. In general, the transactions include incoming requests sent by way of one client connection, or the like.


Turning to FIG. 3, process 300 begins, after a start block, at block 302, where data arriving from the connection is received. Processing next flows to block 304, at which transaction boundaries are determined. Process 300 next proceeds to block 306, at which transactions are split across one or more destination connections, which may be backend server connections. Depending on the features of the particular protocol, a transaction split may occur after a response to a first request has been processed. Processing then returns to a calling process to perform other actions.


As shown in FIG. 4, process 400 begins, after a start block, at block 402, at which transactions arriving from multiple connections are determined. Processing next flows to block 404, where multiple determined transactions are allocated to the same destination connection. Depending on the features of the particular protocol, the determination of a transaction may occur after a response to a first request has been processed. Process 400 then returns to a calling process to perform other actions.



FIG. 5 is a logical flow diagram generally showing a high-level view of one embodiment of a process for configuring network traffic management to support connection splitting and/or connection aggregation for a particular protocol. Following a start block, process 500 begins at block 502, at which a traffic management rule is provided. The rule is adapted to the features and requirements of a particular protocol. The rule may be defined by way of a script that is provided to an interpreter, such as traffic management rules interpreter 220. Next, process 500 flows to block 504, where operations are performed on network traffic in accordance with the configuration rule to enable connection splitting and/or connection aggregation of network transactions. Processing then returns to a calling process to perform other actions.



FIG. 6 illustrates a logical flow diagram generally showing one embodiment of a process for providing connection splitting for an arbitrary protocol, with respect to client requests. Process 600 begins, after a start block, at block 602, where data is received from a client connection. Processing flows next to decision block 604, where a determination is made whether the protocol is one that is known to employ fixed-length headers. If not, processing branches to block 606; otherwise, processing proceeds directly to block 608. At block 606, received data, which may be stored in a buffer, is inspected to determine a header length specifier, after which process 600 steps to block 608.


At block 608, the header for the incoming request is examined by inspecting header_length bytes of the received data. If the header_length bytes are received in multiple segments, the received data is buffered until all the bytes are received. Processing then continues to block 610, where the record type and record length of the request are determined based on inspection of the header. Depending on the protocol, there may be many possible record types.
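As an illustration of blocks 608 and 610, the following fragment, written in the same iRules-style syntax as the example rule presented later in this specification, buffers data until a hypothetical fixed-length header has arrived and then extracts the record type and record length. The six-byte header layout (a 2-byte record type followed by a 4-byte big-endian record length), the variable names, and the buffering behavior shown are assumptions made for illustration only.

when CLIENT_ACCEPTED {
    # begin collecting client payload so the rule can examine transaction headers
    TCP::collect
}

when CLIENT_DATA {
    # hypothetical header layout: 2-byte record type, then a 4-byte record length
    set header_length 6
    if { [TCP::payload length] < $header_length } {
        # block 608: buffer until the complete header has been received
        TCP::collect
        return
    }
    # block 610: determine the record type and record length from the header
    binary scan [TCP::payload] SI record_type record_length
}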


Process 600 next flows to decision block 612. In general, some records for a given protocol are session-specific. For a session-specific record, one particular backend server might have the information associated with the session, while other backend servers would not. Accordingly, at decision block 612, a determination is made whether the record type is session-specific. If not, processing steps directly to block 616. If, however, the record type is determined to be session-specific, process 600 branches to block 614, where additional inspection is performed on the header or, if necessary, the payload of the request to extract session-related information, after which processing advances to block 616.


At block 616, an appropriate server connection from among one or more server connections is selected. In some cases the appropriate server connection will already have been established. Otherwise, various considerations and criteria may be employed to select the server connection, including determining the currently least-loaded server, determining which server has the fewest connections, a round robin technique, the use of a weight or ratio, the use of an appropriate metric, or any other suitable criteria.


Process 600 continues to block 618, where record_length bytes of payload data are released from buffering and forwarded to the selected server connection. Next, processing advances to decision block 620, where a determination is made whether multiple outstanding requests and responses are supported by the protocol. If so, the process loops back to block 602 and receives further data. If, however, the protocol does not support multiple outstanding requests and responses, processing steps to block 622, where subsequent requests are buffered until a response to the processed request is completed. Alternatively, a subsequent request may be sent over a different backend connection; if in-order responses are required, the responses may be buffered as necessary. After the appropriate response is received and forwarded to the client, processing loops back to block 602 and process 600 receives additional data.
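Continuing the sketch begun above, the tail of the CLIENT_DATA handler might select a backend connection, forward one complete transaction, and then resume buffering so that pipelined requests are held until the current response completes. The pool name, and the assumption that the protocol does not support multiple outstanding requests, are illustrative.

    # (continuation of the CLIENT_DATA handler sketched above)
    # block 616: select a server connection; the selection criteria are illustrative
    pool protocol_pool

    if { [TCP::payload length] < $header_length + $record_length } {
        # wait until the complete record has been buffered
        TCP::collect
        return
    }
    # block 618: release the header plus record_length bytes to the selected server
    TCP::release [expr {$header_length + $record_length}]
    # block 622: resume collecting so that subsequent requests are buffered
    # until the response to this request has been forwarded to the client
    TCP::collect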



FIG. 7 illustrates a logical flow diagram generally showing one embodiment of a process for providing connection splitting for an arbitrary protocol, with respect to server responses. In some respects, process 700 is symmetrical to the client request side of connection splitting illustrated in FIG. 6. Process 700 begins, after a start block, at block 702, where data is received from a server connection. Processing then flows to decision block 704, at which it is determined whether the protocol employs fixed-length headers. If so, process 700 steps directly to block 708. Otherwise, if the protocol does not or is not known to feature fixed-length headers, process 700 branches to block 706, at which received data is buffered and inspected to determine a header length specifier, after which processing flows to block 708.


At block 708, header_length bytes of data are inspected. Next, at block 710, the record type of the response is determined. Process 700 then flows to decision block 712, where it is determined whether there is a record length. If there is not a record length, the server connection will not be reused, and processing accordingly branches to block 714, at which the server connection is closed for further transactions. Processing then advances to a return block and performs other actions.


If the determination at decision block 712 is affirmative, process 700 flows to block 716, at which an appropriate client connection is determined for forwarding the server response. Processing then advances to block 718, where record_length bytes of the response payload are released to the determined client connection. Process 700 then loops back to block 702 to receive additional data from the same server connection.
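A corresponding server-side sketch, again in the iRules-style syntax and again assuming the same hypothetical six-byte header, is shown below. A missing record length is represented here as a zero-valued length field, and the close operation of block 714 is indicated only by a comment, since the precise command depends on the device; these are assumptions for illustration.

when SERVER_CONNECTED {
    # begin collecting server payload so response boundaries can be determined
    TCP::collect
}

when SERVER_DATA {
    set header_length 6
    if { [TCP::payload length] < $header_length } {
        TCP::collect
        return
    }
    binary scan [TCP::payload] SI record_type record_length
    if { $record_length == 0 } {
        # block 714: no record length is present, so the server connection
        # would be closed to further transactions (close operation omitted here)
        return
    }
    if { [TCP::payload length] < $header_length + $record_length } {
        TCP::collect
        return
    }
    # block 718: release the complete response toward the determined client connection
    TCP::release [expr {$header_length + $record_length}]
    TCP::collect
}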



FIG. 8 illustrates a logical flow diagram generally showing a simplified view of one embodiment of a process for providing connection aggregation for an arbitrary protocol. The process is similar to, albeit different from, the connection splitting process, and is simpler in some respects. Process 800 begins, after a start block, at block 802, where data is received from a connection for client 1. Processing then flows to decision block 804, where it is determined whether the protocol employs fixed-length headers. If so, process 800 steps to block 808. However, if the determination is negative, process 800 branches to block 806, where buffered data is inspected to determine a header length specifier, following which process 800 advances to block 808.


At block 808, header_length bytes of response data are inspected. Next, at block 810, the record type and record length are determined based on the inspected header. Processing then flows to block 812, at which record_length bytes of buffered payload data are released to a particular server connection (server 1). Process 800 then advances to block 814, where any subsequent requests from the first client connection are buffered until a server response is complete. Processing flows next to decision block 816, at which it is determined whether the connection for server 1 is reusable. For example, the connection to client 1 might now be closed, or subsequent requests from client 1 might be directed to a different server. If the server 1 connection is not reusable, process 800 flows to a return block and performs other actions. If, however, the server 1 connection can be reused, processing branches to block 818, where data is received from a connection for client 2. The process loops back to decision block 804, thus enabling the connection for server 1 to be reused by client 2.
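Connection aggregation relies on the server-side connection being returned for reuse once a response is complete. Following the pattern of the example rule presented below, a minimal sketch of that step simply detaches the server-side connection in the event raised when a complete response has been forwarded:

when USER_RESPONSE {
    # raised (via TCP::notify response) once a complete response has been forwarded;
    # detaching the server-side connection makes it available for reuse by a
    # transaction from a different client connection (blocks 816-818)
    LB::detach
}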


It will be appreciated by those skilled in the art that the flow diagrams of FIGS. 6-8 are simplified for illustrative purposes. For example, processes such as those illustrated in these figures may have additional logic to handle transitions for error states, unexpected connection closures, early responses, and the like.


Example Rule


Presented below is an example of a rule definition that may be employed in an embodiment of the invention. The syntax of the rule is similar to that of iRules, which enable the configuration of a device in the BIG-IP® family of traffic management devices provided by F5 Networks, Inc. of Seattle, Wash. Further information on iRules is available at http://www.f5.com. The example rule specifies several event declarations which determine when aspects of the defined rule are evaluated. As presented, the rule provides load balancing of multiple client requests on one connection across multiple backend servers for an arbitrary TCP-based protocol.


rule oneconnect_rule {
    when CLIENT_ACCEPTED {
        TCP::collect
        set response_done 0
    }

    when CLIENT_DATA {
        # find the end of the request
        set index [string first "\n\n" [TCP::payload]]
        if {$index > 0} {
            TCP::release [expr $index + 2]
            TCP::notify request
        }
    }

    when SERVER_DATA {
        if {$response_done} {
            TCP::notify response
            set response_done 0
            TCP::collect
        }
        # find the end of the response
        set index [string first "\r\n\r\n" [TCP::payload]]
        set len 0
        if {$index > 0} {
            # get the content length
            regexp {Content-Length:\s*(\d+)} [TCP::payload] -> len
            # send up the headers
            TCP::release [expr $index + 4]
            if {[TCP::offset] == $len} {
                # we already have the complete response
                TCP::release $len
                TCP::notify response
            } else {
                # skip the remaining data
                set len [expr $len - [TCP::offset]]
                TCP::release
                puts "skip $len"
                TCP::collect 0 $len
                set response_done 1
            }
        }
    }

    when USER_REQUEST {
        puts "USER_REQUEST"
    }

    when USER_RESPONSE {
        puts "USER_RESPONSE"
        LB::detach
    }
}
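In this example, TCP::collect begins buffering of client payload when the connection is accepted, a blank line ("\n\n") marks the end of a request on the client side, and a Content-Length header determines how much response data remains to be passed through on the server side. TCP::release forwards buffered bytes, TCP::notify signals transaction boundaries (raising the USER_REQUEST and USER_RESPONSE events shown), and LB::detach releases the server-side connection after each completed response so that it may be reused for a subsequent request, in the manner described above for connection splitting and connection aggregation.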


The above specification provides a complete description of the manufacture and use of the composition of the invention. Since many embodiments of the invention can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended.

Claims
  • 1. A network device, comprising: a memory device that stores an interpreter; and a processor configured to execute the interpreter and perform actions, including: receiving a configuration rule change adapted to a protocol that is foreign to the network device where the protocol is not natively supported by the network device without the configuration rule change to the network device; employing the configuration rule change to detect a boundary of a transaction in received traffic; inspecting one or more portions of the received traffic to determine a header length of the transaction, wherein the header length is employed to determine a header of the transaction; determining a record type and a record length that correspond to the transaction based on contents of the header; managing a connection splitting using the record type, the record length, and the detected boundary of the transaction to allocate two or more requests within the received traffic on a same input connection to two or more different backend servers having different backend server connections, wherein a destination backend server is chosen from among the two or more different backend servers based on at least the configuration rule change and at least one criteria for load-balancing the destination backend server; and reusing at least one connection to the two or more backend servers for a second client application when responses by the two or more backend servers to a first client application are complete.
  • 2. The network device of claim 1, wherein the processor performs actions, further including: when a record type for the protocol is determined to be session-specific based on inspection of a protocol header record: extracting session-related information from the received traffic; and using the extracted session-related information in part to select one of the backend server connections.
  • 3. The network device of claim 1, wherein the processor performs actions, further including: determining at least one requirement of the protocol based on an inspection of the received traffic in at least one request; when the protocol is determined to support multiple outstanding requests based on the at least one requirement, receiving additional traffic without waiting for a response to the at least one request; otherwise, buffering subsequent traffic requests until the response to the at least one request is completed.
  • 4. The network device of claim 1, further comprising: receiving subsequent traffic from one of the different backend server connections; determining when there is a record length within a header of the subsequent traffic; and when the record length is undetected within the header of the subsequent traffic, closing the one backend server connection to any subsequent transaction.
  • 5. The network device of claim 1, wherein the processor performs actions, further including: employing the configuration rule change to detect another boundary of another transaction; and further managing a multiple-client connection aggregation using the other detected boundary of the other transaction to direct two or more other requests from two or more different client devices to a same backend server connection to a particular backend server.
  • 6. The network device of claim 5, wherein managing multiple-client connection aggregation further comprises sending a subsequent request from a first client device in the two or more client devices, to a different backend server connection based on a third detected boundary of a third transaction.
  • 7. The network device of claim 1, wherein the protocol is an application-level protocol situated above a Transmission Control Protocol.
  • 8. An apparatus comprising a non-transitory computer readable medium, having computer-executable instructions stored thereon, that in response to execution by a network device, cause the network device to perform operations, comprising: receiving a configuration rule change adapted to a protocol that is foreign to the network device where the protocol is not natively supported by the network device without the configuration rule change to the network device; employing the configuration rule change to detect a boundary of a transaction in received traffic; inspecting one or more portions of the received traffic to determine a header length of the transaction, wherein the header length is employed to determine a header of the transaction; determining a record type and a record length that correspond to the transaction based on contents of the header; managing a connection splitting using the record type, the record length, and the detected boundary of the transaction to allocate two or more requests within the received traffic on a same input connection to two or more different backend servers having different backend server connections, wherein a destination backend server is chosen from among the two or more different backend servers based on at least the configuration rule change and at least one criteria for load-balancing the destination backend server; and reusing at least one connection to the two or more backend servers for a second client application when responses by the two or more backend servers to a first client application are complete.
  • 9. The apparatus of claim 8, wherein the network device performs actions, further including: when a record type for the protocol is determined to be session-specific based on inspection of a protocol header record: extracting session-related information from the received traffic; and using the extracted session-related information in part to select one of the backend server connections.
  • 10. The apparatus of claim 8, wherein the network device performs actions, further including: determining at least one requirement of the protocol based on an inspection of the received traffic in at least one request; when the protocol is determined to support multiple outstanding requests based on the at least one requirement, receiving additional traffic without waiting for a response to the at least one request; otherwise, buffering subsequent traffic requests until the response to the at least one request is completed.
  • 11. The apparatus of claim 8, further comprising: receiving subsequent traffic from one of the different backend server connections; determining if there is a record length within a header of the subsequent traffic; and if the record length is undetected within the header of the subsequent traffic, closing the one backend server connection to any subsequent transactions.
  • 12. The apparatus of claim 8, wherein the network device performs actions, further including: employing the configuration rule change to detect an other boundary of an other transaction; and further managing a multiple-client connection aggregation using the other detected boundary of the other transaction to direct two or more other requests from two or more different client devices to a same backend server connection to a particular backend server.
  • 13. The apparatus of claim 8, wherein the configuration rule change is defined using a scripting language that is evaluated at runtime upon an occurrence of a defined network event.
  • 14. The apparatus of claim 8, wherein the network device operates as a traffic management device that is interposed between one or more client devices and a plurality of backend servers.
  • 15. A system, comprising: a plurality of backend server devices having processors and arranged to receive requests and provide responses; and a network device having one or more processors that perform actions, including: receiving a configuration rule change adapted to a protocol that is foreign to the network device where the protocol is not natively supported by the network device without the configuration rule change to the network device; employing the configuration rule change to detect a boundary of a transaction in received traffic; inspecting one or more portions of the received traffic to determine a header length of the transaction, wherein the header length is employed to determine a header of the transaction; determining a record type and a record length that correspond to the transaction based on contents of the header; managing a connection splitting using the record type, the record length, and the detected boundary of the transaction to allocate two or more requests within the received traffic on a same input connection to two or more different backend server devices in the plurality of server devices having different backend server connections, wherein a destination backend server is chosen from among the two or more different backend servers based on at least the configuration rule change and at least one criteria for load-balancing the destination backend server; and reusing at least one connection to the two or more backend servers for a second client application when responses by the two or more backend servers to a first client application are complete.
  • 16. The system of claim 15, wherein the network device performs actions, further including: when a record type for the protocol is determined to be session-specific based on inspection of a protocol header record: extracting session-related information from the received traffic; and using the extracted session-related information in part to select one of the backend server connections.
  • 17. The system of claim 15, wherein the network device performs actions, further including: determining at least one requirement of the protocol based on an inspection of the received traffic in at least one request; when the protocol is determined to support multiple outstanding requests based on the at least one requirement, receiving additional traffic without waiting for a response to the at least one request; otherwise, buffering subsequent traffic requests until the response to the at least one request is completed.
  • 18. The system of claim 15, further comprising: receiving subsequent traffic from one of the different backend server connections; determining if there is a record length within a header of the subsequent traffic; and if the record length is undetected within the header of the subsequent traffic, closing the one backend server connection to any subsequent transactions.
  • 19. The system of claim 15, wherein the network device performs actions, further including: employing the configuration rule change to detect an other boundary of an other transaction; and further managing a multiple-client connection aggregation using the other detected boundary of the other transaction to direct two or more other requests from two or more different client devices to a same backend server connection to a particular backend server.
  • 20. The system of claim 15, wherein the configuration rule change is defined using a scripting language that is evaluated at runtime upon an occurrence of a defined network event.
CROSS REFERENCE TO RELATED APPLICATIONS

This application is a Continuation of U.S. patent application Ser. No. 11/243,844, filed Oct. 5, 2005, the benefit of which is claimed under 35 U.S.C. §120, and further claims the benefit under 35 U.S.C. §119(e) of U.S. Provisional Patent Application Ser. No. 60/707,856, filed Aug. 12, 2005, which is further incorporated by reference.

US Referenced Citations (316)
Number Name Date Kind
3689872 Sieracki Sep 1972 A
3768726 Hale et al. Oct 1973 A
4021782 Hoerning May 1977 A
4054951 Jackson et al. Oct 1977 A
4316222 Subramaniam Feb 1982 A
4386416 Giltner et al. May 1983 A
4593324 Ohkubo et al. Jun 1986 A
4626829 Hauck Dec 1986 A
4701745 Waterworth Oct 1987 A
4862167 Copeland, III Aug 1989 A
4876541 Storer Oct 1989 A
4890282 Lambert et al. Dec 1989 A
4897717 Hamilton et al. Jan 1990 A
4906991 Fiala et al. Mar 1990 A
4971407 Hoffman Nov 1990 A
4988998 O'Brien Jan 1991 A
5003307 Whiting et al. Mar 1991 A
5016009 Whiting et al. May 1991 A
5063523 Vrenjak Nov 1991 A
5109433 Notenboom Apr 1992 A
5126739 Whiting et al. Jun 1992 A
5146221 Whiting et al. Sep 1992 A
5150430 Chu Sep 1992 A
5155484 Chambers, IV Oct 1992 A
5167034 MacLean, Jr. et al. Nov 1992 A
5212742 Normile et al. May 1993 A
5249053 Jain Sep 1993 A
5280600 Van Maren et al. Jan 1994 A
5293388 Monroe et al. Mar 1994 A
5319638 Lin Jun 1994 A
5341440 Earl et al. Aug 1994 A
5367629 Chu et al. Nov 1994 A
5379036 Storer Jan 1995 A
5410671 Elgamal et al. Apr 1995 A
5414425 Whiting et al. May 1995 A
5463390 Whiting et al. Oct 1995 A
5479587 Campbell et al. Dec 1995 A
5488364 Cole Jan 1996 A
5504842 Gentile Apr 1996 A
5506944 Gentile Apr 1996 A
5539865 Gentile Jul 1996 A
5542031 Douglass et al. Jul 1996 A
5544290 Gentile Aug 1996 A
5546395 Sharma et al. Aug 1996 A
5546475 Bolle et al. Aug 1996 A
5553160 Dawson Sep 1996 A
5553242 Russell et al. Sep 1996 A
5610905 Murthy et al. Mar 1997 A
5615287 Fu et al. Mar 1997 A
5638498 Tyler et al. Jun 1997 A
5768445 Troeller et al. Jun 1998 A
5768525 Kralowetz et al. Jun 1998 A
5774715 Madany et al. Jun 1998 A
5805932 Kawashima et al. Sep 1998 A
5825890 Elgamal et al. Oct 1998 A
5850565 Wightman Dec 1998 A
5874907 Craft Feb 1999 A
5884269 Cellier et al. Mar 1999 A
5892847 Johnson Apr 1999 A
5898837 Guttman et al. Apr 1999 A
5941988 Bhagwat et al. Aug 1999 A
5951623 Reynar et al. Sep 1999 A
5991515 Fall et al. Nov 1999 A
6023722 Colyer Feb 2000 A
6052785 Lin et al. Apr 2000 A
6061454 Malik et al. May 2000 A
6070179 Craft May 2000 A
6182139 Brendel Jan 2001 B1
6185221 Aybay Feb 2001 B1
6223287 Douglas et al. Apr 2001 B1
6226687 Harriman et al. May 2001 B1
6253226 Chidambaran et al. Jun 2001 B1
6298380 Coile et al. Oct 2001 B1
6367009 Davis et al. Apr 2002 B1
6370584 Bestavros et al. Apr 2002 B1
6411986 Susai et al. Jun 2002 B1
6434618 Cohen et al. Aug 2002 B1
6452915 Jorgensen Sep 2002 B1
6584567 Bellwood et al. Jun 2003 B1
6590588 Lincke et al. Jul 2003 B2
6590885 Jorgensen Jul 2003 B1
6594246 Jorgensen Jul 2003 B1
6625150 Yu Sep 2003 B1
6628629 Jorgensen Sep 2003 B1
6629163 Balassanian Sep 2003 B1
6633835 Moran et al. Oct 2003 B1
6643259 Borella et al. Nov 2003 B1
6643701 Aziz et al. Nov 2003 B1
6650640 Muller et al. Nov 2003 B1
6654701 Hatley Nov 2003 B2
6665725 Dietz et al. Dec 2003 B1
6668327 Prabandham et al. Dec 2003 B1
6674717 Duong-van et al. Jan 2004 B1
6681327 Jardin Jan 2004 B1
6697363 Carr Feb 2004 B1
6718388 Yarborough et al. Apr 2004 B1
6754662 Li Jun 2004 B1
6754831 Brownell Jun 2004 B2
6760782 Swales Jul 2004 B1
6763384 Gupta et al. Jul 2004 B1
6766373 Beadle et al. Jul 2004 B1
6768716 Abel et al. Jul 2004 B1
6768726 Dorenbosch et al. Jul 2004 B2
6789203 Belissent Sep 2004 B1
6792461 Hericourt Sep 2004 B1
6798743 Ma et al. Sep 2004 B1
6799276 Belissent Sep 2004 B1
6829238 Tokuyo et al. Dec 2004 B2
6831923 Laor et al. Dec 2004 B1
6842462 Ramjee et al. Jan 2005 B1
6842860 Branstad et al. Jan 2005 B1
6845449 Carman et al. Jan 2005 B1
6854117 Roberts Feb 2005 B1
6895443 Aiken May 2005 B2
6928082 Liu et al. Aug 2005 B2
6934260 Kanuri Aug 2005 B1
6934848 King et al. Aug 2005 B1
6950434 Viswanath et al. Sep 2005 B1
6954462 Chu et al. Oct 2005 B1
6954780 Susai et al. Oct 2005 B2
6957272 Tallegas et al. Oct 2005 B2
6990592 Richmond et al. Jan 2006 B2
7013338 Nag et al. Mar 2006 B1
7013342 Riddle Mar 2006 B2
7023804 Younes et al. Apr 2006 B1
7047315 Srivastava May 2006 B1
7051330 Kaler et al. May 2006 B1
7068640 Kakemizu et al. Jun 2006 B2
7069438 Balabine et al. Jun 2006 B2
7092727 Li et al. Aug 2006 B1
7103045 Lavigne et al. Sep 2006 B2
7113993 Cappiello et al. Sep 2006 B1
7127524 Renda et al. Oct 2006 B1
7139792 Mishra et al. Nov 2006 B1
7139811 Lev Ran et al. Nov 2006 B2
7177311 Hussain et al. Feb 2007 B1
7181493 English et al. Feb 2007 B2
7185360 Anton et al. Feb 2007 B1
7215637 Ferguson et al. May 2007 B1
7231445 Aweya et al. Jun 2007 B1
7231657 Honarvar et al. Jun 2007 B2
7251218 Jorgensen Jul 2007 B2
7254639 Siegel et al. Aug 2007 B1
7266613 Brown et al. Sep 2007 B1
7272651 Bolding et al. Sep 2007 B1
7280471 Rajagopal et al. Oct 2007 B2
7287077 Haugh et al. Oct 2007 B2
7287082 O'Toole, Jr. Oct 2007 B1
7313627 Noble Dec 2007 B1
7315513 McCann et al. Jan 2008 B2
7321926 Zhang et al. Jan 2008 B1
7324447 Morford Jan 2008 B1
7350229 Lander Mar 2008 B1
7359974 Quinn et al. Apr 2008 B1
7362762 Williams, Jr. et al. Apr 2008 B2
7366755 Cuomo et al. Apr 2008 B1
7409450 Jorgensen Aug 2008 B2
7421515 Marovich Sep 2008 B2
7463637 Bou-Diab et al. Dec 2008 B2
7484011 Agasaveeran et al. Jan 2009 B1
7496750 Kumar et al. Feb 2009 B2
7571313 Messerges et al. Aug 2009 B2
7586851 Panigrahy et al. Sep 2009 B2
7596137 Bennett Sep 2009 B2
7599283 Varier et al. Oct 2009 B1
7619983 Panigrahy Nov 2009 B2
7623468 Panigrahy et al. Nov 2009 B2
7624436 Balakrishnan et al. Nov 2009 B2
7711857 Balassanian May 2010 B2
7721084 Salminen et al. May 2010 B2
8009566 Zuk et al. Aug 2011 B2
8270413 Weill et al. Sep 2012 B2
8584131 Wong et al. Nov 2013 B2
20010032254 Hawkins Oct 2001 A1
20010049741 Skene et al. Dec 2001 A1
20010049786 Kawan et al. Dec 2001 A1
20020025036 Sato Feb 2002 A1
20020054567 Fan May 2002 A1
20020055980 Goddard May 2002 A1
20020057678 Jiang et al. May 2002 A1
20020059428 Susai et al. May 2002 A1
20020073223 Darnell et al. Jun 2002 A1
20020085587 Mascolo Jul 2002 A1
20020103663 Bankier et al. Aug 2002 A1
20020107903 Richter et al. Aug 2002 A1
20020126671 Ellis et al. Sep 2002 A1
20020133586 Shanklin et al. Sep 2002 A1
20020141393 Eriksson et al. Oct 2002 A1
20020147916 Strongin et al. Oct 2002 A1
20020169980 Brownell Nov 2002 A1
20030018827 Guthrie et al. Jan 2003 A1
20030043755 Mitchell Mar 2003 A1
20030050974 Mani-Meitav et al. Mar 2003 A1
20030061256 Mathews et al. Mar 2003 A1
20030069973 Ganesan et al. Apr 2003 A1
20030097460 Higashiyama et al. May 2003 A1
20030097484 Bahl May 2003 A1
20030097593 Sawa et al. May 2003 A1
20030103466 McCann et al. Jun 2003 A1
20030110230 Holdsworth et al. Jun 2003 A1
20030112755 McDysan Jun 2003 A1
20030123447 Smith Jul 2003 A1
20030126029 Dastidar et al. Jul 2003 A1
20030139183 Rantalainen Jul 2003 A1
20030140230 de Jong et al. Jul 2003 A1
20030154406 Honarvar et al. Aug 2003 A1
20030169859 Strathmeyer et al. Sep 2003 A1
20030172090 Asunmaa et al. Sep 2003 A1
20030177267 Orava et al. Sep 2003 A1
20030179738 Diachina et al. Sep 2003 A1
20030214948 Jin et al. Nov 2003 A1
20030217171 Von Stuermer et al. Nov 2003 A1
20030223413 Guerrero Dec 2003 A1
20030235204 Azevedo et al. Dec 2003 A1
20040004975 Shin et al. Jan 2004 A1
20040006638 Oberlander et al. Jan 2004 A1
20040008629 Rajagopal et al. Jan 2004 A1
20040008664 Takahashi et al. Jan 2004 A1
20040008728 Lee Jan 2004 A1
20040010473 Hsu et al. Jan 2004 A1
20040015686 Connor et al. Jan 2004 A1
20040017796 Lemieux et al. Jan 2004 A1
20040034773 Balabine et al. Feb 2004 A1
20040037322 Sukonik et al. Feb 2004 A1
20040052257 Abdo et al. Mar 2004 A1
20040078491 Gormish et al. Apr 2004 A1
20040083394 Brebner et al. Apr 2004 A1
20040088585 Kaler et al. May 2004 A1
20040095934 Cheng et al. May 2004 A1
20040098619 Shay May 2004 A1
20040098620 Shay May 2004 A1
20040107360 Herrmann et al. Jun 2004 A1
20040148425 Haumont et al. Jul 2004 A1
20040177276 MacKinnon et al. Sep 2004 A1
20040193513 Pruss et al. Sep 2004 A1
20040218603 Lee et al. Nov 2004 A1
20040225810 Hiratsuka Nov 2004 A1
20040225880 Mizrah Nov 2004 A1
20040228360 Bae et al. Nov 2004 A1
20040230695 Anschutz et al. Nov 2004 A1
20040240446 Compton Dec 2004 A1
20040255243 Vincent, III Dec 2004 A1
20040260657 Cockerham Dec 2004 A1
20040260745 Gage et al. Dec 2004 A1
20050021957 Gu Jan 2005 A1
20050049934 Nakayama et al. Mar 2005 A1
20050060295 Gould et al. Mar 2005 A1
20050063303 Samuels et al. Mar 2005 A1
20050063307 Samuels et al. Mar 2005 A1
20050071643 Moghe Mar 2005 A1
20050074007 Samuels et al. Apr 2005 A1
20050091513 Mitomo et al. Apr 2005 A1
20050108420 Brown et al. May 2005 A1
20050114700 Barrie et al. May 2005 A1
20050125692 Cox et al. Jun 2005 A1
20050132060 Mo et al. Jun 2005 A1
20050135436 Nigam et al. Jun 2005 A1
20050144278 Atamaniouk Jun 2005 A1
20050154914 Eguchi et al. Jul 2005 A1
20050160289 Shay Jul 2005 A1
20050171930 Arning et al. Aug 2005 A1
20050187979 Christensen et al. Aug 2005 A1
20050193208 Charrette, III et al. Sep 2005 A1
20050203988 Nollet et al. Sep 2005 A1
20050216555 English et al. Sep 2005 A1
20050238010 Panigrahy et al. Oct 2005 A1
20050238011 Panigrahy Oct 2005 A1
20050238012 Panigrahy et al. Oct 2005 A1
20050238022 Panigrahy Oct 2005 A1
20050265235 Accapadi et al. Dec 2005 A1
20050271048 Casey Dec 2005 A1
20050278775 Ross Dec 2005 A1
20060005008 Kao Jan 2006 A1
20060020598 Shoolman et al. Jan 2006 A1
20060026290 Pulito et al. Feb 2006 A1
20060029062 Rao et al. Feb 2006 A1
20060029063 Rao et al. Feb 2006 A1
20060029064 Rao et al. Feb 2006 A1
20060036747 Galvin et al. Feb 2006 A1
20060037071 Rao et al. Feb 2006 A1
20060041507 Novack et al. Feb 2006 A1
20060056443 Tao et al. Mar 2006 A1
20060062228 Ota et al. Mar 2006 A1
20060089994 Hayes Apr 2006 A1
20060123226 Kumar et al. Jun 2006 A1
20060153228 Stahl et al. Jul 2006 A1
20060161667 Umesawa et al. Jul 2006 A1
20060174332 Bauban et al. Aug 2006 A1
20060227802 Du et al. Oct 2006 A1
20060233166 Bou-Diab et al. Oct 2006 A1
20060235973 McBride et al. Oct 2006 A1
20060239503 Petrovic et al. Oct 2006 A1
20060242300 Yumoto et al. Oct 2006 A1
20060242688 Paramasivam et al. Oct 2006 A1
20060265689 Kuznetsov et al. Nov 2006 A1
20060276196 Jiang et al. Dec 2006 A1
20060288404 Kirshnan et al. Dec 2006 A1
20060291402 Yun et al. Dec 2006 A1
20060294366 Nadalin et al. Dec 2006 A1
20070094336 Pearson Apr 2007 A1
20070094714 Bauban et al. Apr 2007 A1
20070121615 Weill et al. May 2007 A1
20070153798 Krstulich Jul 2007 A1
20070156919 Potti et al. Jul 2007 A1
20080034127 Nishio Feb 2008 A1
20080253366 Zuk et al. Oct 2008 A1
20080270618 Rosenberg Oct 2008 A1
20080291910 Tadimeti et al. Nov 2008 A1
20080320582 Chen et al. Dec 2008 A1
20090063852 Messerges et al. Mar 2009 A1
20090077618 Pearce et al. Mar 2009 A1
20090106433 Knouse et al. Apr 2009 A1
20090252148 Dolganow et al. Oct 2009 A1
20090295905 Civanlar et al. Dec 2009 A1
20090327113 Lee et al. Dec 2009 A1
20100251343 Barrett Sep 2010 A1
Non-Patent Literature Citations (156)
“BIG-IP Controller With Exclusive OneConnect Content Switching Feature Provides a Breakthrough System for Maximizing Server and Network Performance,” F5 Networks, Inc., Press Release, May 8, 2001, accessed Jun. 4, 2002, 3 pages.
“Consistent Hashing,” Wikipedia—the free encyclopedia, http://en.wikipedia.org/w/index.php?title=Consistent—hashing&print . . . , accessed Jul. 25, 2008, 1 page.
“Control Plane,” Wikipedia—the free encyclopedia, http://en.wikipedia.org/w/index.php?title=Control—plane&printable=yes, accessed Jul. 31, 2008, 4 pages.
"Data Compression Applications and Innovations Workshop," Proceedings of a Workshop held in conjunction with the IEEE Data Compression Conference, Mar. 31, 1995, 123 pages.
“Direct Access Storage Device Compression and Decompression Data Flow,” IBM Technical Disclosure Bulletin, vol. 38, No. 11, Nov. 1995, pp. 291-295.
“Drive Image Professional for DOS, OS/2 and Windows,” WSDC Download Guide, http://wsdcds01.watson.ibm.com/WSDC.nsf/Guides/Download/Applications-DriveImage.htm, accessed Nov. 22, 1999, 4 pages.
“Drive Image Professional,” WSDC Download Guide, http://wsdcds01.watson.ibm.com/wsdc.nsf/Guides/Download/Applications-DriveImage.htm, accessed May 3, 2001, 5 pages.
“editcap—Edit and/or translate the format of capture files,” ethereal.com, www.ethereal.com/docs/man-pages/editcap.1.html, accessed Apr. 15, 2004, 3 pages.
“ethereal—Interactively browse network traffic,” ethereal.com, www.ethereal.com/docs/man-pages/ethereal.1.html, accessed Apr. 15, 2004, 29 pages.
“FAQ: Network Intrusion Detection Systems,” robertgraham.com, Mar. 21, 2000, www.robertgraham.com/pubs/network-intrusion-detection.html, accessed Apr. 15, 2004.
"Forwarding Plane," Wikipedia—the free encyclopedia, http://en.wikipedia.org/w/index.php?title=Forwarding—plane&printa . . . , accessed Jul. 31, 2008.
“IBM Announces New Feature for 3480 Subsystem,” Tucson Today, vol. 12, No. 337, Jul. 25, 1989, 1 page.
“IBM Technology Products Introduces New Family of High-Performance Data Compression Products,” IBM Corporation, Somers, NY, Aug. 16, 1993, 6 pages.
“Magstar and IBM 3590 High Performance Tape Subsystem Technical Guide,” IBM, Nov. 1996, 287 pages.
“Network Management,” Wikipedia—the free encyclopedia, http://en.wikipedia.org/w/index.php?title=Network—management&pr . . . , accessed Jul. 31, 2008, 3 pages.
“Network Sniffer,” linuxmigration.com, www.linuxmigration.com/quickref/admin/ethereal.html, accessed Apr. 15, 2004, 4 pages.
“Telecommunications Network,” Wikipedia—the free encyclopedia, http://en.wikipedia.org/w/index.php?title=Telecommunications—net . . . , accessed Jul. 31, 2008, 2 pages.
"tethereal—Dump and analyze network traffic," ethereal.com, www.ethereal.com/docs/man-pages/tethereal.1.html, accessed Apr. 15, 2004, 11 pages.
About Computing & Technology, “Wireless/Networking, Nagle algorithm,” visited Dec. 6, 2005, 2 pages, http://compnetworking.about.com/od/tcpip/l/bldef—nagle.htm.
Acharya et al., “Scalable Web Request Routing with MPLS,” IBM Research Report, IBM Research Division, Dec. 5, 2001.
Adaptive Lossless Data Compression—ALDC, IBM, Jun. 15, 1994, 2 pages.
ALDC-Macro—Adaptive Lossless Data Compression, IBM Microelectronics, May 1994, 2 pages.
ALDC1-20S—Adaptive Lossless Data Compression, IBM Microelectronics, May 1994, 2 pages.
ALDC1-40S—Adaptive Lossless Data Compression, IBM Microelectronics, May 1994, 2 pages.
ALDC1-5S—Adaptive Lossless Data Compression, IBM Microelectronics, May 1994, 2 pages.
Australia's Academic and Research Network, “Programs and large MTU, Nagle algorithm,” visited Dec. 9, 2005, 3 pages, http://www.aarnet.edu.au/engineering/networkdesign/mtu/programming.html.
Berners-Lee, T. et al., "Uniform Resource Identifiers (URI): Generic Syntax," RFC 2396, Aug. 1998.
Berners-Lee, T. et al., RFC 1945, “Hypertext Transfer Protocol—HTTP/1.0,” May 1996, 60 pages.
Braden, R., “Requirements for Internet Hosts,” RFC 1122, Oct. 1989, 101 pages.
Bryhni et al., “A Comparison of Load Balancing Techniques for Scalable Web Servers,” IEEE Network, Jul./Aug. 2000, pp. 58-64.
Castineyra, I. et al., “The Nimrod Routing Architecture,” RFC 1992, Aug. 1996, 27 pages.
Cheng, J.M. et al., “A Fast, Highly Reliable Data Compression Chip and Algorithm for Storage Systems,” IBM, vol. 40, No. 6, Nov. 1996, 11 pages.
Costlow, T., “Sony Designs Faster, Denser Tape Drive,” Electronic Engineering Times, May 20, 1996, 2 pages.
Craft, D. J., “A Fast Hardware Data Compression Algorithm and Some Algorithmic Extensions,” IBM Journal of Research and Development, vol. 42, No. 6, Nov. 1998, 14 pages.
Craft, D. J., “Data Compression Choice No Easy Call,” Computer Technology Review, Jan. 1994, 2 pages.
Degermark, M. et al., “Low-Loss TCP/IP Header Compression for Wireless Networks,” J.C. Baltzar AG, Science Publishers, Oct. 1997, pp. 375-387.
Dierks, T. et al., RFC 2246, “The TLS Protocol, Version 1.0,” Jan. 1999, 80 pages.
Electronic Engineering Times, Issue 759, Aug. 16, 1993, 37 pages.
Electronic Engineering Times, Issue 767, Oct. 11, 1993, 34 pages.
Enger, R., et al., “FYI on a Network Management Tool Catalog: Tools for Monitoring and Debugging TCP/IP Internets and Interconnected Devices,” RFC 1470, Jun. 1993, 52 pages.
F5 Networks Delivers Blistering Application Traffic Management Performance and Unmatched Intelligence via New Packet Velocity ASIC and BIG-IP Platforms, F5 Networks, Inc. Press Release dated Oct. 21, 2002, 3 pages.
Fielding, R. et al., RFC 2616, “Hypertext Transfer Protocol—HTTP/1.1,” Jun. 1999, 114 pages.
Fielding, R. et al., "Hypertext Transfer Protocol—HTTP/1.1," Network Working Group, RFC 2068, Jan. 1997, 152 pages.
fifi.org, “Manpage of TCP,” visited Dec. 9, 2005, 6 pages, http://www.fifi.org/cgi-bin/man2html/usr/share/man/man7/tcp.7.gz.
Freier, A. et al., “The SSL Protocol Version 3.0,” IETF, Internet Draft, Nov. 18, 1996, 60 pages.
Freier, A. et al., Netscape Communications Corporation, “The SSL Protocol, Version 3.0,” Mar. 1996, 60 pages.
Hewitt, J. R. et al., "Securities Practice and Electronic Technology," Corporate Securities Series, (New York: Law Journal Seminars-Press) 1998, title page, bibliography page, pp. 4.29-4.30.
Hinden, R. et al., “Format for Literal IPv6 Addresses in URL's,” IETF RFC 2732, Dec. 1999.
Hochmuth, P., “F5, CacheFlow Pump Up Content-Delivery Lines,” NetworkWorld, May 4, 2001, http://www.network.com/news/2001/0507cachingonline.html, accessed Jun. 1, 2005, 3 pages.
Housley, R., et al., “Internet X.509 Public Key Infrastructure Certificate and CRL Profile,” RFC 2459, Jan. 1999, 115 pages.
IBM Microelectronics Comdex Fall 1993 Booth Location, 1 page.
IP Multimedia Subsystem, http://en.wikipedia.org/w/index.php?title=IP—Multimedia—Subsyst . . . , accessed May 15, 2008, 8 pages.
Jacobson, V. et al., “TCP Extensions for High Performance,” May 1992, http://www.faqs.org/rfcs/rfc1323.html, pp. 1-37.
Kessler, G. et al., RFC 1739, “A Primer on Internet and TCP/IP Tools,” Dec. 1994, 46 pages.
Mapp, G., “Transport Protocols—What's wrong with TCP,” Jan. 28, 2004, LCE Lecture at: http://www-Ice.eng.cam.ac.uk/˜gem11, 4F5-Lecture4.pdf, pp. 1-60.
Nagle, J., RFC 896, “Congestion control in IP/TCP internetworks,” Jan. 6, 1984, 13 pages.
Nielsen, H. F. et al., "Network Performance Effects of HTTP/1.1, CSS1, and PNG," Jun. 24, 1997, W3 Consortium, http://www.w3.org/TR/NOTE-pipelining-970624, pp. 1-19.
OpenSSL, visited Apr. 12, 2006, 1 page, www.openssl.org.
Oracle Communication and Mobility Server, Aug. 2007, http://www.oracle.com/technology/products/ocms/otn—front.html, accessed May 15, 2008, 108 pages.
Paxson, V., RFC 2525, “Known TCP Implementation Problems,” Mar. 1999, 61 pages.
Postel, J., “Transmission Control Protocol,” Sep. 1981, Information Sciences Institute, University of Southern California, Marina del Rey, California, http://www.faqs.org/rfcs/rfc793.html, pp. 1-21.
Readme, Powerquest Corporation, 1994-2002, 6 pages.
Reardon, M., “A Smarter Session Switch: Arrowpoint's CS Session Switches Boast the Brains Needed for E-Commerce,” Data Communications, Jan. 1999, title page, pp. 3, 5, 18.
Rescorla, E. “SSL and TLS, Designing and Building Secure Systems”, 2001, Addison-Wesley, 46 pages.
RSA Laboratories, "PKCS #1 v2.0: RSA Cryptography Standard," Oct. 1, 1998, 35 pages.
Schneider, K. et al., "PPP for Data Compression in Data Circuit-Terminating Equipment (DCE)," RFC 1976, Aug. 1996, 10 pages.
Schroeder et al., "Scalable Web Server Clustering Technologies," IEEE Network, May/Jun. 2000, pp. 38-45.
SearchNetworking.com, “Nagle's algorithm,” visited Dec. 6, 2005, 3 pages, http://searchnetworking.techtarget.com/sDefinition/0,,sid7—gc1754347,00.html.
Secure and Optimize Oracle 11i E-Business Suite with F5 Solutions, F5 Application Ready Network Guide, Oracle E-Business Suite 11i, Aug. 2007, 2 pages.
Session Initiation Protocol, http://en.wikipedia.org/w/index.php?title=Session—lnitiation—Protoc . . . , accessed May 14, 2008, 5 pages.
Simpson, W. “The Point-To-Point Protocol (PPP),” RFC 1661, Jul. 1994, 54 pages.
Stevens, W. R., “TCP/IP Illustrated,” vol. 1: The Protocols, Addison-Wesley Professional, Dec. 31, 1993, pp. 1-17.
Stevens, W., “TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms,” Jan. 1997, Sunsite.dk, http://rfc.sunsite.dk/rfc/rfc2001.html, pp. 1-6.
Tormasov, A. et al., “TCP/IP options for high-performance data transmission,” visited Dec. 9, 2005, 4 pages, http://builder.com/5100-6732-1050878.html.
Using the Universal Inspection Engine, Manual Chapter: BIG-IP Solutions Guide v4.6.2: Using the Universal Inspection Engine, 2002, 8 pages.
Valloppillil, V. et al., “Cache Array Routing Protocol v1.0,” Feb. 1998, http://icp.ircache.net/carp.txt, accessed Jul. 25, 2008, 7 pages.
W3C, “HTTP/1.1 and Nagle's Algorithm,” visited Dec. 6, 2005, 3 pages, http://www.w3.org/Protocols/HTTP/Performance/Nagle/.
Zebrose, K. L., “Integrating Hardware Accelerators into Internetworking Switches,” Telco Systems, Nov. 1993, 10 pages.
Official Communication for U.S. Appl. No. 10/172,411 mailed Dec. 16, 2005.
Official Communication for U.S. Appl. No. 10/172,411 mailed Jun. 19, 2006.
Official Communication for U.S. Appl. No. 10/172,411 mailed Sep. 1, 2006.
Official Communication for U.S. Appl. No. 10/172,411 mailed Nov. 13, 2006.
Official Communication for U.S. Appl. No. 10/172,411 mailed Apr. 30, 2007.
Official Communication for U.S. Appl. No. 10/172,411 mailed Oct. 18, 2007.
Official Communication for U.S. Appl. No. 10/172,411 mailed Apr. 25, 2008.
Official Communication for U.S. Appl. No. 10/172,411 mailed Oct. 1, 2008.
Official Communication for U.S. Appl. No. 10/409,951 mailed Jun. 16, 2006.
Official Communication for U.S. Appl. No. 10/409,951 mailed Nov. 30, 2006.
Official Communication for U.S. Appl. No. 10/409,951 mailed Mar. 8, 2007.
Official Communication for U.S. Appl. No. 10/409,951 mailed Aug. 24, 2007.
Official Communication for U.S. Appl. No. 10/409,951 mailed Jan. 4, 2008.
Official Communication for U.S. Appl. No. 10/409,951 mailed Jul. 17, 2008.
Official Communication for U.S. Appl. No. 10/719,375 mailed Jun. 18, 2008.
Official Communication for U.S. Appl. No. 10/719,375 mailed Apr. 24, 2009.
Official Communication for U.S. Appl. No. 10/719,375 mailed Nov. 13, 2009.
Official Communication for U.S. Appl. No. 10/719,375 mailed May 25, 2010.
Official Communication for U.S. Appl. No. 10/719,375 mailed Dec. 7, 2010.
Official Communication for U.S. Appl. No. 10/719,375 mailed Jun. 16, 2011.
Official Communication for U.S. Appl. No. 10/719,375 mailed Jan. 25, 2012.
Official Communication for U.S. Appl. No. 11/139,061 mailed Feb. 5, 2009.
Official Communication for U.S. Appl. No. 11/139,061 mailed Sep. 22, 2009.
Official Communication for U.S. Appl. No. 11/258,551 mailed Mar. 3, 2009.
Official Communication for U.S. Appl. No. 11/258,551 mailed Jan. 4, 2010.
Official Communication for U.S. Appl. No. 11/258,551 mailed Aug. 3, 2010.
Official Communication for U.S. Appl. No. 11/258,551 mailed Mar. 31, 2011.
Official Communication for U.S. Appl. No. 11/258,551 mailed Aug. 15, 2012.
Official Communication for U.S. Appl. No. 11/366,367 mailed Aug. 22, 2008.
Official Communication for U.S. Appl. No. 11/366,367 mailed Feb. 11, 2009.
Official Communication for U.S. Appl. No. 11/366,367 mailed Aug. 4, 2009.
Official Communication for U.S. Appl. No. 11/366,367 mailed Mar. 3, 2010.
Official Communication for U.S. Appl. No. 11/366,367 mailed Jun. 9, 2010.
Official Communication for U.S. Appl. No. 11/366,367 mailed Nov. 9, 2010.
Official Communication for U.S. Appl. No. 11/366,367 mailed Mar. 11, 2011.
Official Communication for U.S. Appl. No. 11/366,367 mailed Jun. 1, 2011.
Official Communication for U.S. Appl. No. 12/199,768 mailed Jun. 18, 2010.
Official Communication for U.S. Appl. No. 12/199,768 mailed Nov. 22, 2010.
Official Communication for U.S. Appl. No. 12/199,768 mailed Feb. 1, 2011.
Official Communication for U.S. Appl. No. 12/475,307 mailed Feb. 10, 2011.
Official Communication for U.S. Appl. No. 12/475,307 mailed Jul. 22, 2011.
Official Communication for U.S. Appl. No. 12/475,307 mailed Sep. 29, 2011.
Official Communication for U.S. Appl. No. 11/243,844 mailed Oct. 8, 2008.
Official Communication for U.S. Appl. No. 11/243,844 mailed Feb. 20, 2009.
Official Communication for U.S. Appl. No. 11/243,844 mailed Jul. 16, 2009.
Official Communication for U.S. Appl. No. 11/243,844 mailed Feb. 19, 2010.
Official Communication for U.S. Appl. No. 11/243,844 mailed Aug. 3, 2010.
Official Communication for U.S. Appl. No. 11/243,844 mailed Dec. 27, 2010.
Official Communication for U.S. Appl. No. 11/243,844 mailed Jun. 9, 2011.
Official Communication for U.S. Appl. No. 11/243,844 mailed Nov. 8, 2011.
Official Communication for U.S. Appl. No. 11/243,844 mailed Jun. 14, 2012.
Official Communication for U.S. Appl. No. 11/366,367 mailed May 18, 2012.
Official Communication for U.S. Appl. No. 10/719,375 mailed Feb. 27, 2013.
Official Communication for U.S. Appl. No. 11/243,844 mailed Nov. 26, 2012.
Official Communication for U.S. Appl. No. 11/258,551 mailed Dec. 6, 2012.
Official Communication for U.S. Appl. No. 12/199,768 mailed Dec. 16, 2013.
Official Communication for U.S. Appl. No. 10/719,375 mailed Dec. 23, 2013.
Official Communication for U.S. Appl. No. 10/719,375 mailed Mar. 27, 2014.
Official Communication for U.S. Appl. No. 13/615,007 mailed Jan. 7, 2014.
Official Communication for U.S. Appl. No. 13/174,237 mailed Mar. 21, 2014.
Official Communication for U.S. Appl. No. 13/174,237 mailed Sep. 25, 2014.
Official Communication for U.S. Appl. No. 12/199,768 mailed Apr. 23, 2014.
Official Communication for U.S. Appl. No. 12/199,768 mailed Sep. 24, 2014.
Fielding, R. et al., “Hypertext Transfer Protocol-HTTP/1.1”, Jun. 1999 pp. 1-114, W3 Consortium, http://www.w3.org/Protocols/HTTP/1.1/rfc2616.pdf.
Official Communication for U.S. Appl. No. 13/174,237 mailed Dec. 8, 2014.
Official Communication for U.S. Appl. No. 12/199,768 mailed Jan. 15, 2015.
Official Communication for U.S. Appl. No. 12/475,307 mailed Jan. 26, 2015.
Official Communication for U.S. Appl. No. 13/174,237 mailed Apr. 6, 2015.
Official Communication for U.S. Appl. No. 12/199,768 mailed Apr. 29, 2015.
Official Communication for U.S. Appl. No. 10/719,375 mailed Jan. 29, 2015.
Official Communication for U.S. Appl. No. 11/366,367 mailed Jun. 21, 2013.
Official Communication for U.S. Appl. No. 13/229,483 mailed May 13, 2013.
Official Communication for U.S. Appl. No. 13/592,187 mailed Aug. 16, 2013.
Provisional Applications (1)
Number Date Country
60707856 Aug 2005 US
Continuations (1)
Number Date Country
Parent 11243844 Oct 2005 US
Child 13615007 US