Methods for improved transmission control protocol (TCP) performance visibility and devices thereof

Information

  • Patent Grant
  • Patent Number
    10,412,198
  • Date Filed
    Friday, September 29, 2017
  • Date Issued
    Tuesday, September 10, 2019
Abstract
Methods, non-transitory computer readable media, network traffic management apparatuses, and network traffic management systems that generate a duration corresponding to a current one of a plurality of states in a TCP connection. The duration is generated based on a difference between a stored time recorded at a previous transition to the current one of the states and a current time. The duration is stored or output as associated with the current one of the states. The stored time recorded at the previous transition to the current one of the states is then replaced with the current time. A determination is made when one or more TCP configurations should be modified based on the duration for the current one of the states. The one or more TCP configurations are automatically modified to improve TCP performance, when the determining indicates that the one or more TCP configurations should be modified.
Description
FIELD

This technology generally relates to network traffic management and, more particularly, to methods and devices for improved Transmission Control Protocol (TCP) performance visibility.


BACKGROUND

Network devices are often misconfigured in ways that reduce Transmission Control Protocol (TCP) performance, resulting in data loss and/or slow exchanges of data across TCP connections, for example. One exemplary network device that utilizes TCP connections is a network traffic management apparatus that may perform load balancing, application acceleration, and/or security management functions on behalf of a server pool, for example, among many other possible functions. Unfortunately, network administrators do not currently have sufficient visibility with respect to TCP connections to address performance issues exhibited by network traffic management apparatuses, such as by adjusting particular TCP configuration(s).


SUMMARY

A method for improved Transmission Control Protocol (TCP) performance implemented by a network traffic management system comprising one or more network traffic management apparatuses, client devices, or server devices, the method including generating a duration corresponding to a current one of a plurality of states in a TCP connection. The duration is generated based on a difference between a stored time recorded at a previous transition to the current one of the states and a current time. The duration is stored or output as associated with the current one of the states. The stored time recorded at the previous transition to the current one of the states is then replaced with the current time. A determination is made when one or more TCP configurations should be modified based on the duration for the current one of the states. The one or more TCP configurations are automatically modified to improve TCP performance, when the determining indicates that the one or more TCP configurations should be modified.


A network traffic management apparatus, comprising memory comprising programmed instructions stored thereon and one or more processors configured to be capable of executing the stored programmed instructions to generate a duration corresponding to a current one of a plurality of states in a TCP connection. The duration is generated based on a difference between a stored time recorded at a previous transition to the current one of the states and a current time. The duration is stored or output as associated with the current one of the states. The stored time recorded at the previous transition to the current one of the states is then replaced with the current time. A determination is made when one or more TCP configurations should be modified based on the duration for the current one of the states. The one or more TCP configurations are automatically modified to improve TCP performance, when the determining indicates that the one or more TCP configurations should be modified.


A non-transitory computer readable medium having stored thereon instructions for improved TCP performance comprising executable code which when executed by one or more processors, causes the processors to generate a duration corresponding to a current one of a plurality of states in a TCP connection. The duration is generated based on a difference between a stored time recorded at a previous transition to the current one of the states and a current time. The duration is stored or output as associated with the current one of the states. The stored time recorded at the previous transition to the current one of the states is then replaced with the current time. A determination is made when one or more TCP configurations should be modified based on the duration for the current one of the states. The one or more TCP configurations are automatically modified to improve TCP performance, when the determining indicates that the one or more TCP configurations should be modified.


A network traffic management system, comprising one or more network traffic management apparatuses, client devices, or server devices, the network traffic management system comprising memory comprising programmed instructions stored thereon and one or more processors configured to be capable of executing the stored programmed instructions to generate a duration corresponding to a current one of a plurality of states in a TCP connection. The duration is generated based on a difference between a stored time recorded at a previous transition to the current one of the states and a current time. The duration is stored or output as associated with the current one of the states. The stored time recorded at the previous transition to the current one of the states is then replaced with the current time. A determination is made when one or more TCP configurations should be modified based on the duration for the current one of the states. The one or more TCP configurations are automatically modified to improve TCP performance, when the determining indicates that the one or more TCP configurations should be modified.


This technology has a number of associated advantages including providing methods, non-transitory computer readable media, network traffic management apparatuses, and network traffic management systems that more effectively and automatically adjust TCP configurations to improve performance of managed TCP connections. With this technology, durations that TCP connections are in particular states can be determined and analyzed to identify particular configurations that could be adjusted to improve performance. As a result, this technology facilitates reduced latency and improved network communications.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of an exemplary network traffic management system with a network traffic management apparatus;



FIG. 2 is a block diagram of an exemplary network traffic management apparatus;



FIG. 3 is a flowchart of an exemplary method for determining state durations in a Transmission Control Protocol (TCP) connection; and



FIG. 4 is a flowchart of an exemplary method for determining whether a TCP connection has transitioned to a new state.





DETAILED DESCRIPTION

Referring to FIG. 1, an exemplary network traffic management system 10, which incorporates an exemplary network traffic management apparatus 12, is illustrated. In this example, the network traffic management apparatus 12 is coupled to a plurality of server devices 14(1)-14(n) and a plurality of client devices 16(1)-16(n) via communication network(s) 18, although the network traffic management apparatus 12, server devices 14(1)-14(n), and/or client devices 16(1)-16(n) may be coupled together via other topologies. Additionally, the network traffic management system 10 may include other network devices such as one or more routers and/or switches, for example, which are well known in the art and thus will not be described herein. This technology provides a number of advantages including methods, non-transitory computer readable media, network traffic management systems, and network traffic management apparatuses that analyze TCP connections and associated state changes to facilitate improved TCP performance.


Referring to FIGS. 1-2, the network traffic management apparatus 12 of the network traffic management system 10 may perform any number of functions including managing network traffic, load balancing network traffic across the server devices 14(1)-14(n), and/or accelerating network traffic associated with applications hosted by the server devices 14(1)-14(n), for example. The network traffic management apparatus 12 in this example includes one or more processors 20, a memory 22, and/or a communication interface 24, which are coupled together by a bus 26 or other communication link, although the network traffic management apparatus 12 can include other types and/or numbers of elements in other configurations.


The processor(s) 20 of the network traffic management apparatus 12 may execute programmed instructions stored in the memory 22 of the network traffic management apparatus 12 for any number of the functions identified above. The processor(s) 20 of the network traffic management apparatus 12 may include one or more CPUs or general purpose processors with one or more processing cores, for example, although other types of processor(s) can also be used.


The memory 22 of the network traffic management apparatus 12 stores these programmed instructions for one or more aspects of the present technology as described and illustrated herein, although some or all of the programmed instructions could be stored elsewhere. A variety of different types of memory storage devices, such as random access memory (RAM), read only memory (ROM), hard disk, solid state drives, flash memory, or other computer readable medium which is read from and written to by a magnetic, optical, or other reading and writing system that is coupled to the processor(s) 20, can be used for the memory 22.


Accordingly, the memory 22 of the network traffic management apparatus 12 can store one or more applications that can include computer executable instructions that, when executed by the network traffic management apparatus 12, cause the network traffic management apparatus 12 to perform actions, such as to transmit, receive, or otherwise process messages, for example, and to perform other actions described and illustrated below with reference to FIGS. 3-4. The application(s) can be implemented as modules or components of other applications. Further, the application(s) can be implemented as operating system extensions, modules, plugins, or the like.


Even further, the application(s) may be operative in a cloud-based computing environment. The application(s) can be executed within or as virtual machine(s) or virtual server(s) that may be managed in a cloud-based computing environment. Also, the application(s), and even the network traffic management apparatus 12 itself, may be located in virtual server(s) running in a cloud-based computing environment rather than being tied to one or more specific physical network computing devices. Also, the application(s) may be running in one or more virtual machines (VMs) executing on the network traffic management apparatus 12. Additionally, in one or more embodiments of this technology, virtual machine(s) running on the network traffic management apparatus 12 may be managed or supervised by a hypervisor.


In this particular example, the memory 22 of the network traffic management apparatus 12 includes a TCP monitor module 28 and a TCP statistics table 30, although the memory 22 can include other policies, modules, databases, tables, data structures, or applications, for example. The TCP monitor module 28 in this example is configured to monitor TCP connections with the client devices 16(1)-16(n) and/or server devices 14(1)-14(n). In particular, the TCP monitor module 28 is configured to identify state transitions associated with TCP connections and the duration that the network traffic management apparatus 12 is in each state, as described and illustrated in more detail later.


The TCP statistics table 30 in this example stores the duration associated with each state as well as a time of a previous transition for each monitored TCP connection. Optionally, the TCP statistics table 30 can facilitate reporting of the durations and/or transitions, which can be aggregated by entity and combined with additional performance information and/or statistics stored by an application visibility and reporting module of the network traffic management apparatus 12. Other information can also be stored in the TCP statistics table 30, and TCP state duration data can also be stored in other manners in other examples.
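
By way of illustration only, one possible shape for such a statistics table is sketched below in Python. The key layout and field names are hypothetical and are not part of this disclosure; any equivalent per-connection record would serve.

    from dataclasses import dataclass, field
    from typing import Dict, Tuple

    # Hypothetical connection key: (client IP, client port, server IP, server port).
    ConnKey = Tuple[str, int, str, int]

    @dataclass
    class ConnEntry:
        # Accumulated seconds the connection has spent in each state.
        durations: Dict[str, float] = field(default_factory=dict)
        # Time recorded at the previous state transition (monotonic seconds).
        prev_transition_time: float = 0.0
        # State the connection is currently in.
        current_state: str = "three_way_handshake"

    # The TCP statistics table: one entry per monitored TCP connection.
    tcp_statistics_table: Dict[ConnKey, ConnEntry] = {}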


The communication interface 24 of the network traffic management apparatus 12 operatively couples and communicates between the network traffic management apparatus 12, the server devices 14(1)-14(n), and/or the client devices 16(1)-16(n), which are all coupled together by the communication network(s) 18, although other types and/or numbers of communication networks or systems with other types and/or numbers of connections and/or configurations to other devices and/or elements can also be used.


By way of example only, the communication network(s) 18 can include local area network(s) (LAN(s)) or wide area network(s) (WAN(s)), and can use TCP/IP over Ethernet and industry-standard protocols, although other types and/or numbers of protocols and/or communication networks can be used. The communication network(s) 18 in this example can employ any suitable interface mechanisms and network communication technologies including, for example, teletraffic in any suitable form (e.g., voice, modem, and the like), Public Switched Telephone Networks (PSTNs), Ethernet-based Packet Data Networks (PDNs), combinations thereof, and the like. The communication network(s) 18 can also include direct connection(s) (e.g., for when a device illustrated in FIG. 1, such as the network traffic management apparatus 12, one or more of the client devices 16(1)-16(n), or one or more of the server devices 14(1)-14(n) operate as virtual instances on the same physical machine).


While the network traffic management apparatus 12 is illustrated in this example as including a single device, the network traffic management apparatus 12 in other examples can include a plurality of devices or blades each having one or more processors (each processor with one or more processing cores) that implement one or more steps of this technology. In these examples, one or more of the devices can have a dedicated communication interface or memory. Alternatively, one or more of the devices can utilize the memory, communication interface, or other hardware or software components of one or more other devices included in the network traffic management apparatus 12.


Additionally, one or more of the devices that together comprise the network traffic management apparatus 12 in other examples can be standalone devices or integrated with one or more other devices or apparatuses, such as one of the server devices 14(1)-14(n), for example. Moreover, one or more of the devices of the network traffic management apparatus 12 in these examples can be in a same or a different communication network including one or more public, private, or cloud networks, for example.


Each of the server devices 14(1)-14(n) of the network traffic management system 10 in this example includes one or more processors, a memory, and a communication interface, which are coupled together by a bus or other communication link, although other numbers and/or types of network devices could be used. The server devices 14(1)-14(n) in this example process requests received from the client devices 16(1)-16(n) via the communication network(s) 18 according to an HTTP-based application protocol, for example. Various applications may be operating on the server devices and transmitting data (e.g., files or Web pages) to the client devices 16(1)-16(n) via the network traffic management apparatus 12 in response to requests from the client devices 16(1)-16(n). The server devices 14(1)-14(n) may be hardware or software or may represent a system with multiple servers in a pool, which may include internal or external networks.


Although the server devices 14(1)-14(n) are illustrated as single devices, one or more actions of each of the server devices 14(1)-14(n) may be distributed across one or more distinct network computing devices that together comprise one or more of the server devices 14(1)-14(n). Moreover, the server devices 14(1)-14(n) are not limited to a particular configuration. Thus, the server devices 14(1)-14(n) may contain a plurality of network computing devices that operate using a master/slave approach, whereby one of the network computing devices of the server devices 14(1)-14(n) operates to manage and/or otherwise coordinate operations of the other network computing devices. The server devices 14(1)-14(n) may operate as a plurality of network computing devices within a cluster architecture, a peer-to-peer architecture, virtual machines, or within a cloud architecture, for example.


Thus, the technology disclosed herein is not to be construed as being limited to a single environment and other configurations and architectures are also envisaged. For example, one or more of the server devices 14(1)-14(n) can operate within the network traffic management apparatus 12 itself rather than as a stand-alone server device communicating with the network traffic management apparatus 12 via the communication network(s). In this example, the one or more server devices 14(1)-14(n) operate within the memory 22 of the network traffic management apparatus 12.


The client devices 16(1)-16(n) of the network traffic management system 10 in this example include any type of computing device that can send and receive packets using TCP connections via the communication network(s) 18, such as mobile computing devices, desktop computing devices, laptop computing devices, tablet computing devices, virtual machines (including cloud-based computers), or the like. Each of the client devices 16(1)-16(n) in this example includes a processor, a memory, and a communication interface, which are coupled together by a bus or other communication link, although other numbers and/or types of network devices could be used.


The client devices 16(1)-16(n) may run interface applications, such as standard Web browsers or standalone client applications, which may provide an interface to make requests for, and receive content stored on, one or more of the server devices 14(1)-14(n) via the communication network(s) 18. The client devices 16(1)-16(n) may further include a display device, such as a display screen or touchscreen, and/or an input device, such as a keyboard for example.


Although the exemplary network traffic management system 10 with the network traffic management apparatus 12, server devices 14(1)-14(n), client devices 16(1)-16(n), and communication network(s) 18 are described and illustrated herein, other types and/or numbers of systems, devices, components, and/or elements in other topologies can be used. It is to be understood that the systems of the examples described herein are for exemplary purposes, as many variations of the specific hardware and software used to implement the examples are possible, as will be appreciated by those skilled in the relevant art(s).


One or more of the components depicted in the network traffic management system 10, such as the network traffic management apparatus 12, client devices 16(1)-16(n), or server devices 14(1)-14(n), for example, may be configured to operate as virtual instances on the same physical machine. In other words, one or more of the network traffic management apparatus 12, client devices 16(1)-16(n), or server devices 14(1)-14(n) may operate on the same physical device rather than as separate devices communicating through communication network(s). Additionally, there may be more or fewer network traffic management apparatuses, client devices, or server devices than illustrated in FIG. 1. The client devices 16(1)-16(n) could also be implemented as applications on the network traffic management apparatus 12 itself as a further example.


In addition, two or more computing systems or devices can be substituted for any one of the systems or devices in any example. Accordingly, principles and advantages of distributed processing, such as redundancy and replication, also can be implemented, as desired, to increase the robustness and performance of the devices and systems of the examples. The examples may also be implemented on computer system(s) that extend across any suitable network using any suitable interface mechanisms and traffic technologies, including by way of example only teletraffic in any suitable form (e.g., voice and modem), wireless traffic networks, cellular traffic networks, Packet Data Networks (PDNs), the Internet, intranets, and combinations thereof.


The examples may also be embodied as one or more non-transitory computer readable media having instructions stored thereon for one or more aspects of the present technology as described and illustrated by way of the examples herein. The instructions in some examples include executable code that, when executed by one or more processors, causes the processors to carry out steps necessary to implement the methods of the examples of this technology that are described and illustrated herein.


An exemplary method of facilitating improved TCP connection performance will now be described with reference to FIGS. 1-4. Referring more specifically to FIG. 3, an exemplary method for determining state durations in a TCP connection is illustrated. Although steps 300-314 are described and illustrated herein as being performed by the network traffic management apparatus 12 of the network traffic management system 10, one or more other devices of the network traffic management system 10, such as one or more of the client devices 16(1)-16(n) or server devices 14(1)-14(n), for example, can also perform one or more of steps 300-314 in other examples.


In step 300 in this example, the network traffic management apparatus 12 determines whether a new TCP connection has been initiated with a remote network device, such as one of the client devices 16(1)-16(n) or one of the server devices 14(1)-14(n), for example. In some examples, a synchronization packet or SYN message received by the network traffic management apparatus 12 can initiate a three way handshake and a new TCP connection. If the network traffic management apparatus 12 determines that a new TCP connection has not been initiated, then the No branch is taken back to step 300 and the network traffic management apparatus 12 effectively waits for a new TCP connection to be initiated. However, if the network traffic management apparatus 12 determines in step 300 that a new TCP connection has been initiated, then the Yes branch is taken to step 302.


In step 302 in this example, the network traffic management apparatus 12 determines whether an event has occurred with respect to the TCP connection. In some examples, the event can be one or more of sending a packet via the TCP connection, receiving data at the TCP stack from an upper layer, expiration of a TCP timer, or receiving a packet via the TCP connection, such as an acknowledgement packet or ACK message, although other types of events can be used in other examples. If the network traffic management apparatus 12 determines that an event has not occurred with respect to the TCP connection, then the No branch is taken back to step 302 and the network traffic management apparatus effectively waits for an event. However, if the network traffic management apparatus 12 determines in step 302 that an event has occurred with respect to the TCP connection, then the Yes branch is taken to step 304.


In step 304 in this example, the network traffic management apparatus 12 determines whether the event corresponds with a reset packet or RST message that has been sent or received, which indicates an end to the statistical data collection for the TCP connection, which is described and illustrated in more detail later. If the network traffic management apparatus 12 determines that the event is not the sending or receiving of an RST message, then the No branch is taken to step 306.


In step 306 in this example, the network traffic management apparatus 12 determines whether the event corresponds with an acknowledgement, by the remote network device associated with the TCP connection, of a finish packet or FIN message sent by the network traffic management apparatus. If a FIN message has been acknowledged by the remote network device, then the TCP connection is effectively closed, and the statistical data collection for the TCP connection, which is described and illustrated in more detail later, can be stopped. Accordingly, if the network traffic management apparatus 12 determines that the event is not the receiving of an acknowledgement of a sent FIN message, then the No branch is taken to step 308.


In step 308 in this example, the network traffic management apparatus determines whether the event corresponds with a transition to a new state in the TCP connection. One exemplary method for determining whether the TCP connection has transitioned to a new state is described and illustrated in more detail later with reference to FIG. 4. Exemplary states include three way handshake, rate pace, receive window, congestion window, send buffer, Nagle, retransmission, wait for acknowledgement, closing, and application states, although other numbers and types of states can be used in other examples.
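
For reference in the illustrative sketches included below, these exemplary states can be represented as a simple enumeration; the identifiers are illustrative only:

    from enum import Enum

    class TcpState(Enum):
        # Exemplary monitored states from the description; the string values
        # match the state keys used in the other sketches in this document.
        THREE_WAY_HANDSHAKE = "three_way_handshake"
        RATE_PACE = "rate_pace"
        RECEIVE_WINDOW = "receive_window"
        CONGESTION_WINDOW = "congestion_window"
        SEND_BUFFER = "send_buffer"
        NAGLE = "nagle"
        RETRANSMISSION = "retransmission"
        WAIT_FOR_ACK = "wait_for_acknowledgement"
        CLOSING = "closing"
        APPLICATION = "application"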


If the network traffic management apparatus 12 determines that the TCP connection has not transitioned to a new state based on the event, then the No branch is taken back to step 302 and the network traffic management apparatus 12 again waits for another event to occur with respect to the TCP connection. However, in this particular example, if the network traffic management apparatus 12 determines in step 308 that the TCP connection has transitioned to a new state, then the Yes branch is taken to step 310. In other examples, the network traffic management apparatus 12 may exit step 308 based on a timer elapsing. In these examples, statistics can be generated, as described and illustrated in more detail later, even though state transitions are not occurring.


In step 310 in this example, the network traffic management apparatus 12 generates a duration corresponding to a current state. In some examples, the duration is generated based on a difference between a stored time of a previous transition to the current state and a current time. Accordingly, the network traffic management apparatus 12 stores in the memory 22 a timestamp associated with an immediately prior state transition to a current state, which can be compared to a current time to determine a duration in which the TCP connection was in the current state before the TCP connection transitioned to the new state. The network traffic management apparatus 12 can store the generated duration in the TCP statistics table 30 along with an indication of the current state and, optionally, an indication of the TCP connection or one or more parameters associated with the TCP connection, for example.


In step 312, the network traffic management apparatus 12 replaces the stored time of the previous state transition with the current time so that in a subsequent iteration the network traffic management apparatus 12 can determine in step 310 a duration in which the TCP connection was in the new state. Subsequent to replacing the stored time of the previous state transition, the network traffic management apparatus 12 proceeds back to step 302 and again waits for a determination that an event has occurred with respect to the TCP connection.
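
A minimal Python sketch of steps 310 and 312 follows, assuming a per-connection record like the hypothetical ConnEntry sketched earlier and a monotonic clock; the function name is invented for illustration:

    import time

    def on_state_transition(entry, new_state):
        """Sketch of steps 310 and 312: generate and accumulate the duration
        for the state being exited, then replace the stored transition time.
        `entry` is a per-connection record like the ConnEntry sketched earlier."""
        now = time.monotonic()
        # Step 310: the duration is the difference between the stored time
        # recorded at the previous transition and the current time.
        duration = now - entry.prev_transition_time
        entry.durations[entry.current_state] = (
            entry.durations.get(entry.current_state, 0.0) + duration)
        # Step 312: replace the stored time with the current time so the next
        # iteration measures the duration of the new state.
        entry.prev_transition_time = now
        entry.current_state = new_state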


Referring back to steps 304 and 306, if the network traffic management apparatus 12 determines that the event is the sending or receiving of an RST message or the receiving of an acknowledgement of a sent FIN message, respectively, then one of the Yes branches is taken to step 314. In step 314, the network traffic management apparatus 12 aggregates or reports durations for each of the states that the TCP connection was in while the TCP connection was open, which are maintained in the TCP statistics table 30.


Accordingly, the network traffic management apparatus 12 can aggregate the total time a TCP connection was in a particular state for states to which the TCP connection transitioned. In another example, the network traffic management apparatus 12 can aggregate the duration statistics for a particular TCP connection with one or more other TCP connections associated with a same entity (e.g., geographic location or IP address of remote network device), such as by providing the duration statistics to an application visibility and reporting module of the network traffic management apparatus 12, for example.


In yet other examples, the network traffic management apparatus 12 can output the duration statistics to a file or other data structure that is accessible to an administrator, such as in response to a query via a provided interface. The duration statistics can also be aggregated or reported in different ways in other examples. While the duration statistics are aggregated or reported following the closing of a TCP connection in this example, the duration statistics can also be aggregated or reported at specified time intervals or at other times in other examples.
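
As one non-limiting illustration of such aggregation, the sketch below sums per-state durations across all connections that map to the same entity; entity_of is a hypothetical callback supplied by the caller:

    def aggregate_by_entity(table, entity_of):
        """Sketch: sum per-state durations across all connections that map to
        the same entity (e.g., the remote IP address). `table` maps connection
        keys to ConnEntry-like records; `entity_of` is a hypothetical callback
        extracting the entity from a connection key."""
        totals = {}
        for conn_key, entry in table.items():
            per_state = totals.setdefault(entity_of(conn_key), {})
            for state, seconds in entry.durations.items():
                per_state[state] = per_state.get(state, 0.0) + seconds
        return totals

    # Hypothetical usage: aggregate by the remote (client) IP address.
    # report = aggregate_by_entity(tcp_statistics_table, lambda key: key[0])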


In step 316, the network traffic management apparatus 12 optionally determines whether a duration for any of the states exceeds a configurable threshold (e.g., percentage of total connection time for the TCP connection). In some examples, the network traffic management apparatus 12 can execute a daemon or other process that analyzes the accumulated durations for each state with respect to a stored policy of configurable thresholds. Accordingly, if the network traffic management apparatus 12 determines that a threshold is exceeded for at least one state, then the Yes branch is taken to step 318.


In step 318, the network traffic management apparatus 12 optionally modifies one or more TCP configurations automatically or based on a stored policy. In one particular example, the network traffic management apparatus 12 can disable Nagle's algorithm when the duration associated with the Nagle state for one or more TCP connections exceeds a configurable threshold of 5% of the total connection time for the TCP connection(s). Other types of thresholds, policies, and automated modifications to the TCP configuration(s) can also be used in other examples.
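
A sketch of that particular example follows; maybe_disable_nagle and disable_nagle are hypothetical names, and the latter stands in for a hook into the TCP configuration rather than an API of any real TCP stack:

    def maybe_disable_nagle(entry, disable_nagle, threshold=0.05):
        """Sketch of step 318 for the example above: disable Nagle's algorithm
        when the Nagle-state duration exceeds a configurable threshold (here
        5%) of the total connection time."""
        total = sum(entry.durations.values())
        if total > 0 and entry.durations.get("nagle", 0.0) / total > threshold:
            disable_nagle()  # hypothetical hook into the TCP configuration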


Subsequent to modifying the TCP configuration(s), or if the network traffic management apparatus 12 determines in step 316 that a threshold is not exceeded for any of the states and the No branch is taken, then the network traffic management apparatus 12 proceeds back to step 300. Additionally, steps 302-318 can be performed in parallel for any number of TCP connections in other examples.
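
Tying the steps of FIG. 3 together, a highly simplified control loop for one established connection might look as follows; conn and every callable passed in are hypothetical placeholders for the logic described above:

    def monitor_connection(conn, classify_state, on_state_transition, report_and_check):
        """Sketch of the FIG. 3 loop (steps 302-318) for one established TCP
        connection; conn and the callables are hypothetical placeholders."""
        while True:
            event = conn.wait_for_event()                     # step 302
            if event.is_rst or event.is_fin_acknowledged:     # steps 304 and 306
                report_and_check(conn.stats)                  # steps 314-318
                return
            new_state = classify_state(conn)                  # step 308 (see FIG. 4)
            if new_state is not None and new_state != conn.stats.current_state:
                on_state_transition(conn.stats, new_state)    # steps 310 and 312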


Referring more specifically to FIG. 4, an exemplary method for determining whether a TCP connection has transitioned to a new state is illustrated. While FIG. 4 illustrates a plurality of exemplary monitored states for which durations can be determined, any number of states can be monitored in a TCP connection in other examples. Additionally, FIG. 4 illustrates an exemplary order in which state transitions are analyzed, although the conditions for a state transition can be analyzed in other orders in other examples. Although steps 400-442 are described and illustrated herein as being performed by the network traffic management apparatus 12, one or more other devices of the network traffic management system 10, such as one or more of the client devices 16(1)-16(n) or server devices 14(1)-14(n), for example, can also perform one or more of steps 400-442 in other examples.


In step 400 in this example, the network traffic management apparatus 12 determines whether an ACK message has been received from a remote network device in response to a SYN message sent to the remote network device. The remote network device in this example can be one of the client devices 16(1)-16(n), one of the server devices 14(1)-14(n), or any other network device within or external to the network traffic management system 10. If the network traffic management apparatus 12 determines that it has not received an ACK message from a remote network device in response to a SYN message, then the No branch is taken to step 402.


In step 402, the network traffic management apparatus 12 determines that the TCP connection is transitioning to a three way handshake state. In this example, the three way handshake is an initial state, which cannot be transitioned to from any other state. Accordingly, the network traffic management apparatus 12 can set or store an initial time of transition to the three way handshake based on a current time, which can be used to determine the duration the TCP connection is in the three way handshake state, as described and illustrated in more detail earlier with reference to steps 310-312.


However, if the network traffic management apparatus 12 determines in step 400 that it has received an ACK message from a remote network device in response to a SYN message sent to the remote network device, then the Yes branch is taken to step 404. In step 404, the network traffic management apparatus 12 determines whether the most recent sent packet is associated with the highest sequence number that has been observed for the TCP connection.


In this example, the network traffic management apparatus 12 maintains, such as in the memory 22, an indication of the sequence numbers that have been used in the context of the TCP connection. Accordingly, if the network traffic management apparatus 12 determines that a sent packet is not associated with the highest sequence number, then the sent packet is a retransmission and the No branch is taken to step 406. In step 406, the network traffic management apparatus 12 determines that the TCP connection is transitioning to a retransmission state.
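
The retransmission test of steps 404 and 406 reduces to a comparison against the highest sequence number recorded so far, as in the following sketch with hypothetical field names:

    def is_retransmission(entry, sent_seq):
        """Sketch of steps 404 and 406: a sent packet that does not advance
        the highest sequence number observed on the connection is treated as
        a retransmission. `highest_seq` is a hypothetical stored field."""
        if sent_seq <= entry.highest_seq:
            return True                   # No branch of step 404: step 406
        entry.highest_seq = sent_seq      # Yes branch: update the stored highest
        return False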


However, if the network traffic management apparatus 12 determines in step 404 that the most recent sent packet is associated with the highest sequence number that has been observed for the TCP connection, then the Yes branch is taken to step 408 and, optionally, the stored highest sequence number is updated. In step 408, the network traffic management apparatus 12 determines whether it has sent all available data or whether there is data waiting in the memory 22 to be sent via the TCP connection to the remote network device. If the network traffic management apparatus 12 determines that all available data has been sent, then the Yes branch is taken to step 410.


In step 410, the network traffic management apparatus 12 determines whether all of the data sent to the remote network device has been acknowledged. If the network traffic management apparatus 12 determines that all of the data has not been acknowledged, then the No branch is taken to step 412. In step 412, the network traffic management apparatus 12 determines that the TCP connection is transitioning to a wait for acknowledgement state. However, if the network traffic management apparatus 12 determines in step 410 that all of the data sent to the remote network device has been acknowledged, then the Yes branch is taken to step 414.


In step 414, the network traffic management apparatus 12 determines whether a FIN message has been sent to the remote network device. If all of the data sent to the remote network device has been acknowledged, but a FIN message has not been sent, then the network traffic management apparatus 12 is waiting for additional data, such as from an upper layer or an application.


Accordingly, if the network traffic management apparatus 12 determines that a FIN message has been sent, then the Yes branch is taken to step 416. In step 416, the network traffic management apparatus 12 determines that the TCP connection is transitioning to a closing state, indicating that the network traffic management apparatus 12 is waiting on an acknowledgement of the FIN message and for the TCP connection to close. If the FIN message has been acknowledged, then the network traffic management apparatus 12 does not determine whether the TCP connection has transitioned to a new state based on the analysis of FIG. 4, as described and illustrated in more detail earlier with reference to step 306 of FIG. 3.


However, if the network traffic management apparatus 12 determines in step 414 that a FIN message has not been sent to the remote network device, then the No branch is taken to step 418. In step 418, the network traffic management apparatus 12 determines that the TCP connection is transitioning to an application state indicating that the network traffic management apparatus 12 is waiting on an application for more data.


Referring back to step 408, if the network traffic management apparatus 12 determines that all available data has not been sent, then the No branch is taken to step 420. In step 420, the network traffic management apparatus 12 updates a stored flight size based on a size of the data sent to the remote network device, but not yet acknowledged by the remote network device. The current flight size associated with the TCP connection, as well as the current total size of unsent data, can be stored in the memory 22 of the network traffic management apparatus 12, for example.


In step 422, the network traffic management apparatus 12 determines whether the flight size is less than one maximum segment size (MSS) below the last advertised receive window of the remote network device. If the network traffic management apparatus 12 determines that the flight size is less than one maximum segment size below the last advertised receive window of the remote network device, then the Yes branch is taken to step 424. In step 424, the network traffic management apparatus 12 determines that the TCP connection is transitioning to a receive window state.


However, if the network traffic management apparatus 12 determines in step 422 that the flight size is not less than one maximum segment size (MSS) below the last advertised receive window (RWND) of the remote network device, then the No branch is taken to step 426. In step 426, the network traffic management apparatus 12 determines whether the flight size is less than one maximum segment size below a maximum send buffer size for the TCP connection.


If the network traffic management apparatus 12 determines that the flight size is less than one maximum segment size below a maximum send buffer size for the TCP connection, then the Yes branch is taken to step 428. In step 428, the network traffic management apparatus 12 determines that the TCP connection is transitioning to a send buffer state.


However, if the network traffic management apparatus 12 determines in step 426 that the flight size is not less than one maximum segment size below a maximum send buffer size for the TCP connection, then the No branch is taken to step 430. In step 430, the network traffic management apparatus 12 determines whether the flight size is less than one maximum segment size below a size of a current congestion window for the TCP connection, such as can be determined by a congestion control algorithm that is currently implemented for the TCP connection.


If the network traffic management apparatus 12 determines that the flight size is less than one maximum segment size below the size of the current congestion window for the TCP connection, then the Yes branch is taken to step 432. In step 432, the network traffic management apparatus 12 determines that the TCP connection is transitioning to a congestion window state.


However, if the network traffic management apparatus 12 determines in step 430 that the flight size is not less than one maximum segment size below a size of a current congestion window for the TCP connection, then the No branch is taken to step 434. In step 434, the network traffic management apparatus 12 determines whether the Nagle algorithm is enabled and there is unsent data in memory of less than one maximum segment size.


If the network traffic management apparatus 12 determines that the Nagle algorithm is enabled and there is unsent data in the memory 22 of less than one maximum segment size, then the Yes branch is taken to step 436. In step 436, the network traffic management apparatus 12 determines that the TCP connection is transitioning to a Nagle state.


However, if the network traffic management apparatus 12 determines in step 434 that the Nagle algorithm is not enabled or there is not unsent data in the memory 22 of less than one maximum segment size, then the No branch is taken to step 438. In step 438, the network traffic management apparatus 12 determines whether there is an active process limiting the rate at which packets exit in order to conform to link bandwidth and there is unsent data in the memory 22.


If the network traffic management apparatus 12 determines that there is an active process limiting the rate at which packets exit in order to conform to link bandwidth and there is unsent data in the memory 22, then the Yes branch is taken to step 440. In step 440, the network traffic management apparatus 12 determines that the TCP connection is transitioning to a rate pace state.


However, if the network traffic management apparatus 12 determines in step 438 that there is not an active process limiting the rate at which packets exit in order to conform to link bandwidth or there is no unsent data in the memory 22, then the No branch is taken to step 442. In step 442, the network traffic management apparatus 12 determines that the TCP connection is not transitioning states.


Accordingly, in this particular example, steps 400-442 are performed by the network traffic management apparatus 12 for each determination of an event, as described and illustrated earlier with reference to step 302 of FIG. 3, for which the conditions described and illustrated earlier with reference to steps 304 and 306 of FIG. 3 are not satisfied. Other types and numbers of conditions or states or other methods of determining whether a state transition has occurred in a TCP connection can also be used in other examples.
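
The decision chain of FIG. 4 can also be summarized as a single classification routine. The sketch below is illustrative only: c is a hypothetical snapshot of the connection's send state, its attribute names are invented, and the window and buffer tests follow the "flight size within one MSS of the limit" reading described above.

    def classify_state(c):
        """Sketch of the FIG. 4 decision chain (steps 400-442). `c` is a
        hypothetical snapshot of the connection; returns the state being
        transitioned to, or None when no transition occurs (step 442)."""
        if not c.syn_acknowledged:                          # step 400
            return "three_way_handshake"                    # step 402
        if c.last_sent_seq < c.highest_seq:                 # step 404
            return "retransmission"                         # step 406
        if c.all_data_sent:                                 # step 408
            if not c.all_data_acked:                        # step 410
                return "wait_for_acknowledgement"           # step 412
            if c.fin_sent:                                  # step 414
                return "closing"                            # step 416
            return "application"                            # step 418
        flight = c.flight_size                              # step 420
        if c.recv_window - flight < c.mss:                  # step 422
            return "receive_window"                         # step 424
        if c.max_send_buffer - flight < c.mss:              # step 426
            return "send_buffer"                            # step 428
        if c.congestion_window - flight < c.mss:            # step 430
            return "congestion_window"                      # step 432
        if c.nagle_enabled and 0 < c.unsent_bytes < c.mss:  # step 434
            return "nagle"                                  # step 436
        if c.rate_pacing_active and c.unsent_bytes > 0:     # step 438
            return "rate_pace"                              # step 440
        return None                                         # step 442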


By generating and providing statistics regarding the duration that TCP connection(s) are in particular states, network administrators can more effectively identify the delays that may be occurring in the TCP connection(s). Additionally, by identifying the types of delays and associated durations, administrators can advantageously adjust, and/or network traffic management apparatuses can automatically adjust, TCP configuration(s) in order to reduce the delays (e.g., disable the Nagle algorithm), and thereby improve TCP performance.


Having thus described the basic concept of the invention, it will be rather apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only, and is not limiting. Various alterations, improvements, and modifications will occur to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested hereby, and are within the spirit and scope of the invention. Additionally, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefor, is not intended to limit the claimed processes to any order except as may be specified in the claims. Accordingly, the invention is limited only by the following claims and equivalents thereto.

Claims
  • 1. A method for improved transmission control protocol (TCP) performance implemented by a network traffic management system comprising one or more network traffic management apparatuses, client devices, or server devices, and the method comprising: monitoring a TCP connection to determine when an event and a current transition from a previous state to a new state have occurred in the TCP connection and, when the determination indicates that the event and the current transition from the previous state to the new state have occurred in the TCP connection: generating a duration corresponding to the previous state; determining, based on the generated duration, when one or more TCP configurations require modification; and automatically modifying the one or more TCP configurations to improve TCP performance, when the determination indicates that the one or more TCP configurations require modification.
  • 2. The method of claim 1, further comprising: generating the duration based on a difference between a stored time recorded at a previous transition to the previous state and a current time of the current transition; and replacing the stored time with the current time.
  • 3. The method of claim 1, wherein the event comprises one or more of sending a packet, receiving data from an upper layer, expiration of a TCP timer, or receiving another packet comprising an acknowledgement.
  • 4. The method of claim 1, further comprising storing or outputting the generated duration as associated with the previous state.
  • 5. The method of claim 1, further comprising: determining when a reset packet has been received or an acknowledgement packet has been received in response to a finish packet; and outputting the stored duration or another stored duration corresponding to one or more other states, when the determination indicates that the reset packet has been received or the acknowledgement packet has been received in response to the finish packet.
  • 6. A network traffic management apparatus, comprising memory comprising programmed instructions stored thereon and one or more processors configured to be capable of executing the stored programmed instructions to: monitor a TCP connection to determine when an event and a current transition from a previous state to a new state have occurred in the TCP connection and, when the determination indicates that the event and the current transition from the previous state to the new state have occurred in the TCP connection: generate a duration corresponding to the previous state; determine, based on the generated duration, when one or more TCP configurations require modification; and automatically modify the one or more TCP configurations to improve TCP performance, when the determination indicates that the one or more TCP configurations require modification.
  • 7. The network traffic management apparatus of claim 6, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to: generate the duration based on a difference between a stored time recorded at a previous transition to the previous state and a current time of the current transition; and replace the stored time with the current time.
  • 8. The network traffic management apparatus of claim 6, wherein the event comprises one or more of sending a packet, receiving data from an upper layer, expiration of a TCP timer, or receiving another packet comprising an acknowledgement.
  • 9. The network traffic management apparatus of claim 6, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to store or output the generated duration as associated with the previous state.
  • 10. The network traffic management apparatus of claim 6, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to: determine when a reset packet has been received or an acknowledgement packet has been received in response to a finish packet; and output the stored duration or another stored duration corresponding to one or more other states, when the determination indicates that the reset packet has been received or the acknowledgement packet has been received in response to the finish packet.
  • 11. A non-transitory computer readable medium having stored thereon instructions for improved transmission control protocol (TCP) performance comprising executable code which when executed by one or more processors, causes the one or more processors to: monitor a TCP connection to determine when an event and a current transition from a previous state to a new state have occurred in the TCP connection and, when the determination indicates that the event and the current transition from the previous state to the new state have occurred in the TCP connection: generate a duration corresponding to the previous state; determine, based on the generated duration, when one or more TCP configurations require modification; and automatically modify the one or more TCP configurations to improve TCP performance, when the determination indicates that the one or more TCP configurations require modification.
  • 12. The non-transitory computer readable medium of claim 11, wherein the executable code when executed by the one or more processors further causes the one or more processors to: generate the duration based on a difference between a stored time recorded at a previous transition to the previous state and a current time of the current transition; and replace the stored time with the current time.
  • 13. The non-transitory computer readable medium of claim 11, wherein the event comprises one or more of sending a packet, receiving data from an upper layer, expiration of a TCP timer, or receiving another packet comprising an acknowledgement.
  • 14. The non-transitory computer readable medium of claim 11, wherein the executable code when executed by the one or more processors further causes the one or more processors to store or output the generated duration as associated with the previous state.
  • 15. The non-transitory computer readable medium of claim 11, wherein the executable code when executed by the one or more processors further causes the one or more processors to: determine when a reset packet has been received or an acknowledgement packet has been received in response to a finish packet; and output the stored duration or another stored duration corresponding to one or more other states, when the determination indicates that the reset packet has been received or the acknowledgement packet has been received in response to the finish packet.
  • 16. A network traffic management system, comprising one or more traffic management apparatuses, client devices, or server devices, memory comprising programmed instructions stored thereon, and one or more processors configured to be capable of executing the stored programmed instructions to, when a determination indicates that an event and a current transition from a previous state to a new state have occurred in a monitored transmission control protocol (TCP) connection: generate a duration corresponding to the previous state; and automatically modify one or more TCP configurations to improve TCP performance, when a determination based on the generated duration indicates that the one or more TCP configurations require modification.
  • 17. The network traffic management system of claim 16, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to: generate the duration based on a difference between a stored time recorded at a previous transition to the previous state and a current time of the current transition; and replace the stored time with the current time.
  • 18. The network traffic management system of claim 16, wherein the event comprises one or more of sending a packet, receiving data from an upper layer, expiration of a TCP timer, or receiving another packet comprising an acknowledgement.
  • 19. The network traffic management system of claim 16, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to store or output the generated duration as associated with the previous state.
  • 20. The network traffic management system of claim 16, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to output the stored duration or another stored duration corresponding to one or more other states, when a determination indicates that a reset packet has been received or an acknowledgement packet has been received in response to a finish packet.
Parent Case Info

This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/413,488, filed on Oct. 27, 2016, which is hereby incorporated by reference in its entirety.

Provisional Applications (1)
Number Date Country
62/413,488 Oct. 27, 2016 US