Methods for traffic rate control and devices thereof

Information

  • Patent Grant
  • Patent Number
    10,187,317
  • Date Filed
    Monday, November 17, 2014
  • Date Issued
    Tuesday, January 22, 2019
Abstract
A method, non-transitory computer readable medium, and traffic management computing device that allocates a subset of tokens to active subscribers based on an estimated number of subscribers that will be active in a next sampling period. A request to transmit a first packet is received from one of the active subscribers. A determination is made when a current time is prior to an expiration of the allocated subset of the tokens. Another determination is made when a length of the first packet is less than a size corresponding to an available portion of the allocated subset of the tokens when the current time is determined to be prior to the expiration of the allocated subset of the tokens. The first packet is transmitted when the length of the first packet is determined to be less than a size corresponding to an available portion of the allocated subset of the tokens.
Description
FIELD

This technology generally relates to methods and devices for controlling the rate of network traffic and, more specifically, to controlling or shaping the rate of network traffic to reduce overhead and increase scalability.


BACKGROUND

Transmission Control Protocol (TCP) is used to facilitate communication between applications of a transmitting network device and a receiving network device using the Internet Protocol (IP). In particular, when a transmitting network device transmits data across a network (e.g., the Internet), application layer software on the transmitting network device issues a request using the TCP layer. The TCP layer handles segmentation and SEND/ACK details to ensure delivery of the data to the receiving network device, while the IP layer handles routing.


Network congestion, traffic load balancing, or unpredictable network behavior can cause data packets to be lost, duplicated, or delivered out of order between transmitting and receiving network devices. The TCP layer detects these problems, requests retransmission of lost packets, rearranges out-of-order packets, and/or minimizes network congestion to mitigate anomalous activity. Once the receiving network device has reassembled data packets sent from the transmitting network device, it passes the data packets to the application running on the receiving network device.


When data packets are transmitted between transmitting and receiving network devices, rate or traffic shaping is used by a rate shaper of the transmitting network device to control the rate of flow at which data is transmitted. Typically, the receiving network device continually informs the transmitting network device as to how much data it can receive. When a buffer of the receiving network device fills with data, a subsequent acknowledgment sent to the transmitting network device includes a notification to suspend or stop sending data until the receiving network device is able to process the previously received data packets.


Network performance is generally maintained by using Quality of Service (QoS) functionality performed by the rate shaper, including by using a QoS queue. Typically, a transmitting network device packetizes data in accordance with the TCP and sends the data packets to the QoS queue associated with a receiving network device (also referred to as a subscriber). The QoS queue stores the packets and buffers them before transmitting them in order to control the traffic flow based on predetermined handling parameters. However, this process generally requires performing an enqueue and a dequeue of every packet, which can introduce significant overhead. Additionally, QoS queues, as well as associated token buckets and timers, are not efficiently scalable, and therefore require significant resources to manage a large number of subscribers, which is undesirable.


SUMMARY

A method for traffic rate control includes allocating, by a traffic management computing device, a proportional subset of an amount of tokens to each of one or more currently active subscribers of a plurality of subscribers. The proportional subset of the amount of tokens is allocated based on an estimated number of the plurality of subscribers that will be active in a next sampling period and is based on an established bandwidth profile and a token recharge rate. A request to transmit a first packet is received, by the traffic management computing device, from one of the currently active subscribers. A determination is made, by the traffic management computing device, when a current time is prior to an expiration of the proportional subset of the amount of tokens allocated to the one currently active subscriber. Next, a determination is made, by the traffic management computing device, when a length of the first packet is less than a size corresponding to an available portion of the proportional subset of the amount of tokens allocated to the one currently active subscriber, when the current time is determined to be prior to the expiration of the proportional subset of the amount of tokens allocated to the one currently active subscriber. The first packet is transmitted, by the traffic management computing device, when the length of the first packet is determined to be less than a size corresponding to an available portion of the proportional subset of the amount of tokens allocated to the one currently active subscriber.


A traffic management computing device includes configurable hardware logic configured to be capable of implementing, or a processor and a memory coupled to the processor configured to be capable of executing, programmed instructions stored in the memory to allocate a proportional subset of an amount of tokens to each of one or more currently active subscribers of a plurality of subscribers. The proportional subset of the amount of tokens is allocated based on an estimated number of the plurality of subscribers that will be active in a next sampling period and is based on an established bandwidth profile and a token recharge rate. A request to transmit a first packet is received from one of the currently active subscribers. A determination is made when a current time is prior to an expiration of the proportional subset of the amount of tokens allocated to the one currently active subscriber. Next, a determination is made when a length of the first packet is less than a size corresponding to an available portion of the proportional subset of the amount of tokens allocated to the one currently active subscriber, when the current time is determined to be prior to the expiration of the proportional subset of the amount of tokens allocated to the one currently active subscriber. The first packet is transmitted when the length of the first packet is determined to be less than a size corresponding to an available portion of the proportional subset of the amount of tokens allocated to the one currently active subscriber.


A non-transitory computer readable medium having stored thereon instructions for traffic rate control comprising executable code which, when executed by a processor, causes the processor to perform steps including allocating a proportional subset of an amount of tokens to each of one or more currently active subscribers of a plurality of subscribers. The proportional subset of the amount of tokens is allocated based on an estimated number of the plurality of subscribers that will be active in a next sampling period and is based on an established bandwidth profile and a token recharge rate. A request to transmit a first packet is received from one of the currently active subscribers. A determination is made when a current time is prior to an expiration of the proportional subset of the amount of tokens allocated to the one currently active subscriber. Next, a determination is made when a length of the first packet is less than a size corresponding to an available portion of the proportional subset of the amount of tokens allocated to the one currently active subscriber, when the current time is determined to be prior to the expiration of the proportional subset of the amount of tokens allocated to the one currently active subscriber. The first packet is transmitted when the length of the first packet is determined to be less than a size corresponding to an available portion of the proportional subset of the amount of tokens allocated to the one currently active subscriber.


This technology provides a number of advantages including more efficient and effective methods, non-transitory computer readable media, and devices for controlling the rate of network traffic. With this technology, packets need not all be queued in order to manage the rate at which packets are transmitted. Accordingly, enqueue and dequeue operations are not performed for every packet, thereby reducing overhead. Additionally, reduced throughput due to tail dropping of packets associated with TCP connections is mitigated at least in part by implementing an early drop policy. Moreover, packets that are unable to be transmitted immediately are advantageously queued and resubmitted based on expiration of a flow timer, thereby reducing the amount of time required to transmit a packet that may have otherwise been dropped.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of a network environment with an exemplary traffic management computing device;



FIG. 2 is a block diagram of the exemplary traffic management computing device illustrated in FIG. 1;



FIG. 3 is a flowchart of an exemplary method for allocating bandwidth and processing requests to transmit packets received from a subscriber;



FIG. 4 is a flowchart of an exemplary method for sweeping subscriber contexts to maintain active and inactive states and an active subscriber count; and



FIG. 5 is a flowchart of an exemplary method for processing dropped packets.





DETAILED DESCRIPTION

Referring to FIG. 1, a block diagram of an exemplary network environment 10 including an exemplary traffic management computing device 12 is illustrated. In this example, the traffic management computing device 12 is coupled to a plurality of client computing devices 14(1)-14(n) through a local area network (LAN) 16 and a wide area network (WAN) 18 and a plurality of server computing devices 16(1)-16(n) through another LAN 20, although the traffic management computing device 12, client computing devices 14(1)-14(n), and server computing devices 16(1)-16(n) may be coupled together via other topologies. The network environment 10 may include other network devices such as one or more routers and/or switches, for example. Although the traffic management computing device 12 implements the traffic rate control described and illustrated herein in this example, any other type of device running applications configured to handle HTTP communications by transmitting data using TCP connections, for example, can also be used. This technology provides a number of advantages including methods, non-transitory computer readable media, and devices that facilitate relatively scalable bandwidth control for a large number of subscribers without using QoS queues and other resources dedicated to each subscriber.


Referring to FIGS. 1-2, the traffic management computing device 12 may perform any number of functions in addition to implementing traffic rate control, such as optionally optimizing, securing, and/or load balancing the network traffic exchanged between the client computing devices 14(1)-14(n) and the server computing devices 16(1)-16(n), for example. The traffic management computing device 12 includes a processor 22, a memory 24, optional configurable hardware logic 26, and a communication interface 28, which are coupled together by a bus 32 or other communication link, although the traffic management computing device 12 may include other types and numbers of elements in other configurations.


The processor 22 of the traffic management computing device 12 may execute programmed instructions stored in the memory 24 of the traffic management computing device 12 for any number of the functions identified above and/or described herein for controlling traffic rate and, optionally, managing network traffic and/or optimizing service of resource requests, for example. The processor 22 of the traffic management computing device 12 may comprise one or more central processing units and/or general purpose processors with one or more processing cores, for example.


The memory 24 of the traffic management computing device 12 stores these programmed instructions for one or more aspects of the present technology as described and illustrated herein, although some or all of the programmed instructions could be stored and executed elsewhere. A variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM), hard disk drives, solid state drives, or other computer readable medium which is read from and written to by a magnetic, optical, or other reading and writing system that is coupled to the processor, can be used for the memory 24.


In this example, the memory 24 further includes a token allocation table 32 and a subscriber context table 34. The token allocation table 32 includes information regarding the allocated and available amount of tokens for each subscriber, which corresponds to the allocated and available amount of bandwidth for the subscribers, as described and illustrated in more detail later. The subscriber context table 34 includes information regarding a state of each subscriber (e.g., active or inactive) and a transmission count for the current and/or any number of previous sampling periods, for example, as described and illustrated in more detail later.


The optional configurable hardware logic 26 of the traffic management computing device 12 may comprise specialized hardware configured to implement one or more steps of this technology as illustrated and described with reference to the examples herein. By way of example only, the configurable hardware logic 26 may comprise one or more of field programmable gate arrays (FPGAs), field programmable logic devices (FPLDs), application specific integrated circuits (ASICs) and/or programmable logic units (PLUs). In this example, the configurable hardware logic 26 includes a bandwidth controller 36 configured to implement one or more steps of this technology including processing packet transmission requests as described and illustrated in more detail later.


The communication interface 28 operatively couples and communicates between the traffic management computing device 12, the client computing devices 14(1)-14(n), and server computing devices 16(1)-16(n), which are all coupled together by the LANs 16 and 20 and WAN 18, although other types and numbers of communication networks or systems with other types and numbers of connections and configurations to other devices and elements can also be used. By way of example only, the LANs 16 and 20 and WAN 18 can use TCP/IP over Ethernet and industry-standard protocols, including NFS, CIFS, SOAP, XML, LDAP, and SNMP, although other types and numbers of communication networks can be used.


The LANs 16 and 20 in this example may employ any suitable interface mechanisms and network communication technologies including, for example, teletraffic in any suitable form (e.g., voice, modem, and the like), Public Switched Telephone Network (PSTNs), Ethernet-based Packet Data Networks (PDNs), combinations thereof, and the like. The WAN 18 may comprise any wide area network (e.g., Internet), although any other type of traffic network topology may be used.


Each of the client computing devices 14(1)-14(n) and server computing devices 16(1)-16(n) includes a processor, a memory, and a communication interface, which are coupled together by a bus or other communication link, although other numbers and types of network devices could be used. The client computing devices may run interface applications, such as Web browsers, that may provide an interface to make requests for and receive content associated with applications hosted by the server computing devices 16(1)-16(n) via the LANs 16 and 20 and/or WAN 18.


The server computing devices 16(1)-16(n) may provide content or other network resources in response to requests directed toward the respective applications hosted by the server computing devices 16(1)-16(n) from the client computing devices 14(1)-14(n) via the LANs 16 and 20 and/or the WAN 18 according to the HTTP-based application RFC protocol or the CIFS or NFS protocol, for example. The server computing devices 16(1)-16(n) may be hardware or software or may represent a system with multiple server computing devices 16(1)-16(n) in a server computing device pool, which may include internal or external networks. Various network processing applications, such as CIFS applications, NFS applications, HTTP Web Server applications, and/or FTP applications, may be operating on the server computing devices 16(1)-16(n) and transmitting data (e.g., files or web pages) in response to requests from the client computing devices 14(1)-14(n).


Although the exemplary network environment 10 with the traffic management computing device 12, client computing devices 14(1)-14(n), server computing devices 16(1)-16(n), LANs 16 and 20, and WAN 18 are described and illustrated herein, other types and numbers of systems, devices, components, and elements in other topologies can be used. It is to be understood that the systems of the examples described herein are for exemplary purposes, as many variations of the specific hardware and software used to implement the examples are possible, as will be appreciated by those skilled in the relevant art(s).


In addition, two or more computing systems or devices can be substituted for any one of the systems or devices in any example. Accordingly, principles and advantages of distributed processing, such as redundancy and replication also can be implemented, as desired, to increase the robustness and performance of the devices and systems of the examples. The examples may also be implemented on computer system(s) that extend across any suitable network using any suitable interface mechanisms and traffic technologies, including by way of example only teletraffic in any suitable form (e.g., voice and modem), wireless traffic media, wireless traffic networks, cellular traffic networks, 3G traffic networks, Public Switched Telephone Network (PSTNs), Packet Data Networks (PDNs), the Internet, intranets, and combinations thereof.


The examples may also be embodied as a non-transitory computer readable medium having instructions stored thereon for one or more aspects of the present technology as described and illustrated by way of the examples herein, which, when executed by a processor, cause the processor to carry out the steps necessary to implement the methods of the examples, as described and illustrated herein.


An exemplary method for traffic rate control will now be described with reference to FIGS. 1-5. Referring more specifically to FIG. 3, a method for allocating bandwidth and processing requests to transmit packets received from a subscriber is illustrated. In step 300 in this example, the traffic management computing device 12 initializes the bandwidth controller 36 by generating a common token bucket having a plurality of tokens. The number of tokens in, or the depth of, the common token bucket corresponds with an aggregate rate of all subscribers associated with a particular group or class and a token recharge rate.


The aggregate rate and token recharge rate can be established by a configuration provided by an administrator of the traffic management computing device 12, for example. The particular class of subscribers can be based on an association of the subscribers with a specific application or a specified network address or location, for example, although any other attributes can be used to identify a particular class of subscribers. Accordingly, in one example, the depth of the common token bucket corresponds with a network capacity for the class of subscribers.
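As a concrete illustration of this initialization, the following Python sketch computes the common token bucket depth from an aggregate rate and recharge interval. The byte-per-token accounting and the names aggregate_rate_bps and recharge_interval_s are illustrative assumptions, not details required by this technology:

import time

class CommonTokenBucket:
    """Shared token bucket for one class of subscribers (step 300)."""

    def __init__(self, aggregate_rate_bps, recharge_interval_s):
        # Bucket depth corresponds to the class's aggregate capacity over
        # one recharge cycle; one token represents one byte here (assumed).
        self.recharge_interval_s = recharge_interval_s
        self.depth = int(aggregate_rate_bps / 8 * recharge_interval_s)
        self.refill()

    def refill(self):
        # Refill at the start of each recharge cycle; the tokens from this
        # cycle expire when the next cycle begins.
        self.available = self.depth
        self.expires_at = time.monotonic() + self.recharge_interval_s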


In step 302, the traffic management computing device 12 allocates a plurality of tokens proportionally to active subscribers based on an estimated number of active subscribers for a next recharge cycle. The number of allocated tokens can be mapped to each active subscriber in the token allocation table 32. Each of the tokens corresponds with a size of network traffic (also referred to herein as packet length) that can be transmitted by an associated subscriber. Additionally, each token is valid for a specified duration and has an associated expiration.


The number of active subscribers can be estimated based on a moving average of active subscribers in a plurality of prior recharge cycles or based on a number of subscribers currently active at the time of the allocation, for example, although other methods of estimating the number of active subscribers for a next recharge cycle can also be used. A subscriber can be determined to be active based on a state value in an entry of the subscriber context table 34 corresponding to the subscriber and the number of active subscribers can be maintained in an active subscriber count stored in the memory, as described and illustrated in more detail later.
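The estimation and proportional allocation of step 302 might be sketched in Python as follows, assuming a simple moving average over prior recharge cycles; the history deque and token_allocation_table dictionary are hypothetical stand-ins for the token allocation table 32:

from collections import deque

def estimate_active_subscribers(history, current_active):
    # Moving average of active-subscriber counts over prior recharge
    # cycles, seeded with the count observed at allocation time.
    history.append(current_active)
    return max(1, round(sum(history) / len(history)))

def allocate_tokens(bucket, token_allocation_table, active_subscribers, history):
    estimated = estimate_active_subscribers(history, len(active_subscribers))
    quota = bucket.depth // estimated  # proportional per-subscriber share
    for subscriber in active_subscribers:
        token_allocation_table[subscriber] = {
            "available": quota,               # tokens usable this cycle
            "expires_at": bucket.expires_at,  # tokens expire with the cycle
        }
    return quota

# Example: history = deque(maxlen=8) averages over the last eight cycles.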


In step 304, the traffic management computing device 12 receives a request to transmit a packet from a subscriber. The request can be from a TCP stack associated with the subscriber and can be received by the bandwidth controller 36, for example. In this example, the subscriber can correspond to one of the client computing devices 14(1)-14(n), although the subscriber can be any other network device communicating with the traffic management computing device 12. Accordingly, the packet can be a portion of an application layer message, such as an HTTP request for content stored by one of the server computing devices 16(1)-16(n), for example, although any other type of packet with other content or information can also be used.


In step 306, the traffic management computing device 12 determines whether the subscriber that originated the request is inactive. Accordingly, the traffic management computing device 12 can query the subscriber context table 34 based on identifying information included in the packet, for example, to retrieve a state value. The state value can indicate an active or inactive state for a subscriber, as described and illustrated in more detail later with reference to FIG. 4. If the traffic management computing device 12 determines that the subscriber is inactive, then the Yes branch is taken to step 308.


In step 308, the traffic management computing device 12 allocates tokens to the subscriber by inserting or modifying an entry of the token allocation table 32, modifies the context for the subscriber in the subscriber context table 34 to indicate an active state, and increments an active subscriber count stored in the memory 24. In this example, the number of tokens allocated to the subscriber is equivalent to the proportional number of tokens allocated to the active subscribers in step 302, although another amount or number of tokens can also be allocated to the subscriber.


Subsequent to allocating the tokens, modifying the corresponding entry of the subscriber context table 34, and incrementing the active subscriber count in step 308, or if the traffic management computing device 12 determines that the subscriber is active and the No branch is taken from step 306, the traffic management computing device 12 proceeds to step 310. In step 310, the traffic management computing device 12 determines whether a current time is less than an expiration time for the tokens identified in the corresponding entry of the token allocation table 32 as available for use by the subscriber to transmit the packet. If the traffic management computing device 12 determines that the token duration or interval has been exceeded and the tokens are expired, then the No branch is taken back to step 300 and the traffic management computing device 12 again fills the common token bucket, as described and illustrated earlier. Optionally, the packet can be dropped or queued and resubmitted, such as described and illustrated in more detail later with reference to FIG. 5, for example, subsequent to the common token bucket being refilled.


Referring back to step 310, if the traffic management computing device 12 determines that the token duration or interval has not been exceeded, and that the tokens are not expired, then the Yes branch is taken to step 312. In step 312, the traffic management computing device 12 determines whether the length of the packet is less than a size corresponding to one or more available tokens allocated to the subscriber. The tokens available for the subscriber can be identified by querying the token allocation table 32, for example. Accordingly, if the traffic management computing device 12 determines that the subscriber does have sufficient tokens to transmit the packet in the current recharge cycle, then the Yes branch is taken to step 314.
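Taken together, steps 310 and 312 amount to two quick comparisons against the subscriber's entry in the token allocation table 32. A minimal Python sketch follows, reusing the hypothetical entry layout from the earlier sketches; the returned labels simply name the branch taken:

import time

def check_transmit(entry, packet_len):
    # Step 310: the allocated tokens must not have expired.
    if time.monotonic() >= entry["expires_at"]:
        return "REFILL"            # No branch: refill the common bucket (step 300)
    # Step 312: the packet must fit within the subscriber's available tokens.
    if packet_len < entry["available"]:
        return "EARLY_DROP_CHECK"  # Yes branch: apply the early drop policy (step 314)
    return "OVERSUBSCRIPTION"      # No branch: apply the oversubscription policy (step 320)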


In step 314, the traffic management computing device 12 implements an early drop policy to determine whether the packet should be dropped even though the subscriber has enough available tokens to transmit the packet. By implementing an early drop policy and dropping packets early in a recharge cycle, the traffic management computing device 12 can mitigate the undesirable effect on throughput introduced by tail dropping at the end of a recharge cycle. Pseudocode for one exemplary early drop policy is illustrated as follows:


1. If (bi(t) < bi - min_th) ADMIT
2. Else {
3.   p = rand();
4.   If (p < (ri/fi)) DROP;
5. }


In this example, ri is an input rate of packets received from the subscriber, fi is a fair rate for the subscriber, bi(t) is the number of available tokens for the subscriber at time t, bi is the number of tokens allocated to the subscriber in the recharge cycle, and min_th is a minimum threshold number of the tokens allocated to the subscriber that must be used before any packets are dropped pursuant to this exemplary early drop policy. Accordingly, fi and min_th can be established by an administrator of the traffic management computing device 12, bi(t) can be determined from the token allocation table 32, and bi can be the predetermined number of tokens allocated in steps 302 or 308, for example, although the parameters of the early drop policy can have other values and can be determined in other manners.
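The same policy rendered as Python, keeping the admit inequality and the ri/fi drop probability exactly as given above; note that because rand() yields a value in [0, 1), an input rate at or above the fair rate always drops in the else branch:

import random

def early_drop(b_t, b, min_th, r_i, f_i):
    """Return True to DROP, False to ADMIT.

    b_t: available tokens at time t; b: tokens allocated this cycle;
    min_th: minimum token-usage threshold; r_i: input rate; f_i: fair rate.
    """
    if b_t < b - min_th:
        return False                      # ADMIT
    return random.random() < (r_i / f_i)  # DROP with probability ri/fi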


Pseudocode for another exemplary early drop policy is illustrated as follows:


1. Start: No early drop.


2. Count number of tail drop packets (X) out of N packets.


3. Set drop_window=N/X such that a packet is dropped once in every drop window.


4. If no tail drop then drop_window is increased by a fixed amount.


5. If packets are subsequently tail dropped, then recalculate drop_window.


6. Repeat the third through fifth steps.
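A minimal Python sketch of this adaptive drop-window policy follows; the sampling size N and the fixed growth increment are assumed values, since the steps above leave them to configuration:

class DropWindow:
    """Adaptive drop window: drop one packet in every drop_window packets."""

    def __init__(self, n=1000, growth=100):
        self.n = n                # sampling size N (assumed value)
        self.growth = growth      # fixed increase when no tail drops occur (assumed)
        self.drop_window = None   # None means no early drop yet (step 1)
        self.seen = 0

    def on_sample_end(self, tail_drops):
        # Steps 2-5: size the window from observed tail drops, or widen it
        # by a fixed amount when no tail drops occurred.
        if tail_drops > 0:
            self.drop_window = max(1, self.n // tail_drops)
        elif self.drop_window is not None:
            self.drop_window += self.growth

    def should_drop(self):
        # Drop exactly one packet per drop window.
        if self.drop_window is None:
            return False
        self.seen += 1
        if self.seen >= self.drop_window:
            self.seen = 0
            return True
        return False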


Other early drop policies can also be implemented by the traffic management computing device 12 in step 314. If the traffic management computing device 12 determines that the packet should be dropped based on the implemented early drop policy, then the Yes branch is taken to step 316.


In step 316, the traffic management computing device 12 drops or queues the packet depending on a configuration provided by an administrator of the traffic management computing device 12. For example, if the traffic management computing device 12 is operating as a full proxy and terminating TCP connections from both the client computing devices 14(1)-14(n) and the server computing devices 16(1)-16(n), then the traffic management computing device 12 can be configured to queue the packet for later retransmission.


Conversely, if the traffic management computing device 12 is not operating as a full proxy, then the traffic management computing device 12 can be configured to drop the packet. In another example, the traffic management computing device 12 can be configured to drop all packets that the traffic management computing device 12 determines in step 314 should be dropped based on the implementation of the early drop policy. Other configurations establishing whether a packet should be dropped or queued, or another action should be taken, can also be used.


Referring back to step 314, if the traffic management computing device 12 determines that the packet should not be dropped based on the implemented early drop policy then the No branch is taken to step 318. In step 318, the traffic management computing device 12 transmits the packet using the communication interface 28. In step 318, the traffic management computing device 12 also increments a transmission count for the current recharge cycle for the subscriber. The transmission count can be stored in the subscriber context table 34 and can be used to determine whether the subscriber is currently in an active or inactive state, as described and illustrated in more detail later with reference to FIG. 4.


Additionally, the traffic management computing device 12 debits the available tokens for the subscriber in step 318. The amount of tokens allocated to the subscriber and currently available (not yet used) in the current recharge cycle can be maintained in, and debited from, the token allocation table 32, for example. Subsequent to transmitting the packet, incrementing the transmission count, and debiting the available tokens, or during any of steps 306-318, the traffic management computing device 12 receives another request to transmit a packet from the same or a different subscriber in step 304, as described and illustrated earlier.
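The transmit path of step 318 then reduces to a send plus two table updates. A minimal Python sketch, reusing the hypothetical table layouts from the earlier sketches; send stands in for transmission via the communication interface 28:

def transmit_packet(packet, subscriber, token_allocation_table,
                    subscriber_context_table, send):
    send(packet)  # transmit via the communication interface
    ctx = subscriber_context_table[subscriber]
    ctx["tx_count_current"] += 1  # read later by the sweeper of FIG. 4
    # Debit the tokens consumed by this packet (one token per byte assumed).
    token_allocation_table[subscriber]["available"] -= len(packet)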


Referring back to step 312, if the traffic management computing device 12 determines that the subscriber does not have sufficient tokens to transmit the packet in the current recharge cycle, then the No branch is taken to step 320. In step 320, the traffic management computing device 12 implements an oversubscription policy to determine whether the packet can be transmitted. Packets can be outstanding at the end of a recharge cycle when a subscriber runs out of tokens, the network is congested, and/or many subscribers became active and were allocated tokens in the recharge cycle, for example, although packets can be outstanding for other reasons.


Accordingly, an administrator of the traffic management computing device 12 can establish an oversubscription policy, the parameters of which can be stored in the memory 24 for example. The oversubscription policy generally provides for the borrowing of tokens in the next recharge cycle so that outstanding packets can be transmitted in the current recharge cycle. Outstanding packets can optionally be stored in a queue that is processed based on the oversubscription policy at the end of a recharge cycle, although the outstanding packets can also be stored elsewhere. Pseudocode for one exemplary oversubscription policy is illustrated as follows:

1. If ((Li = Len(packet(i))) < bi)
2. // Check if the common token bucket (CTB) can transmit the packet at this instance.
3. If (Li > Ba(t))
4. DROP;
5. Else
6. // Borrow from EB if possible. Before borrowing can begin, one packet must be sacrificed.
7. If (bi >= 0 && Li < debt)
8. If (drop_before_excess_burst == FALSE) DROP;
9. Else drop_before_excess_burst = TRUE;
10. debt = (Li - bi);
11. bi = bi - Li;
12. ADMIT;
13. Else
14. debt = 0;
15. DROP;


In this example, bi is the aggregate maximum token depth of the common token bucket, Ba(t) is the available number of tokens at time t, and EB is an excessive burst size per subscriber that is predetermined by an administrator of the traffic management computing device 12. Other oversubscription policies can also be used.
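One reading of this pseudocode, rendered as a hedged Python sketch: the interpretation that exactly one packet is sacrificed (dropped) before borrowing begins, and that admitted packets simply accumulate debt against the next recharge cycle, is an assumption; a production policy would also cap borrowing at the excess burst size EB:

def oversubscribe(packet_len, bucket_depth, available_now, state):
    # Drop if the packet exceeds the bucket depth or what the common
    # token bucket (CTB) can transmit at this instant.
    if packet_len >= bucket_depth or packet_len > available_now:
        return "DROP"
    # Sacrifice one packet before borrowing from the excess burst begins.
    if not state["dropped_before_excess_burst"]:
        state["dropped_before_excess_burst"] = True
        return "DROP"
    state["debt"] += packet_len  # tokens borrowed from the next recharge cycle
    return "ADMIT"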


Optionally, the oversubscription policy specifies a maximum number of tokens that can be borrowed from the next recharge cycle (e.g., an aggregate amount or an amount per subscriber) and the traffic management computing device 12 can maintain the number of tokens utilized to send at least a subset of the outstanding packets in the current recharge cycle. Accordingly, if the traffic management computing device 12 determines based on the implementation of the oversubscription policy that the packet cannot be transmitted, then the No branch is taken to step 316 and the packet is dropped or queued, as described and illustrated earlier. Alternatively, if the traffic management computing device 12 determines in step 320, based on the implementation of the oversubscription policy, that the packet can be transmitted, then the Yes branch is taken to step 318 and the packet is transmitted, as described and illustrated earlier.


Referring more specifically to FIG. 4, an exemplary method for sweeping subscriber contexts in the subscriber context table 34 to maintain active and inactive states and an active subscriber count is illustrated. In this example, the sweeping of the subscriber contexts described and illustrated in FIG. 4 can be performed, such as by the bandwidth controller 36 for example, in parallel with the method of allocating bandwidth and processing requests to transmit packets described and illustrated earlier with reference to FIG. 3.


Accordingly, in step 400 in this example, the traffic management computing device 12 retrieves one of a plurality of subscriber contexts, which are stored in the subscriber context table 34 in this example. The subscriber contexts in this example include a unique indication of the subscriber, a transmission count for at least a current sampling period and a last sampling period, a state which can indicate an active or inactive state for the subscriber, and a last visit time. Other information can also be stored in the subscriber contexts.


In step 402, the traffic management computing device 12 determines whether the difference between a current time and the last visit time for the one subscriber context is less than the size of the sampling period. Accordingly, the traffic management computing device 12 essentially determines whether the subscriber context has already been visited/retrieved during the current sampling period. Optionally, the sampling period can correspond with the token recharge cycle configured by an administrator of the traffic management computing device 12, as described and illustrated earlier with reference to step 300 of FIG. 3, although a different sampling period can also be used. Accordingly, if the traffic management computing device 12 determines that the difference between a current time and the last visit time for the one subscriber context is not less than the size of the sampling period, then the No branch is taken to step 404.


In step 404, the traffic management computing device 12 determines whether the transmission count for the last sampling period is equivalent to zero. If the transmission count for the last sampling period is equivalent to zero, then the subscriber associated with the one of the subscriber contexts was not active or did not transmit any packets during the last sampling period. If the subscriber associated with the one of the subscriber contexts did transmit one or more packets in the last sampling period, then the transmission count would have been incremented as described and illustrated earlier with reference to step 318 of FIG. 3 and would not be equivalent to zero.


Optionally, in other examples, the traffic management computing device 12 can determine whether the transmission count for a specified number of prior sampling periods, and/or the current sampling period, is also equivalent to zero depending on how many sampling periods of inactivity an administrator of the traffic management computing device 12 would like to require prior to changing the state of the subscriber context to indicate an inactive state. Accordingly, if the traffic management computing device 12 determines that the transmission count for the last sampling period is equivalent to zero, then the Yes branch is taken to step 406.


In step 406, the traffic management computing device 12 determines whether the state in the one subscriber context indicates an active state. If the traffic management computing device 12 determines that the state in the one subscriber context indicates an active state, then the Yes branch is taken to step 408. In step 408, the traffic management computing device 12 modifies the one subscriber context to indicate an inactive state and decrements an active subscriber count. The active subscriber count can be maintained in the memory 24, as described and illustrated earlier with reference to step 308 of FIG. 3, and can be used to generate the common token bucket and/or to allocate the plurality of tokens, as described and illustrated earlier with reference to steps 300 and 302 of FIG. 3.


Subsequent to modifying the subscriber context, or if the traffic management computing device 12 determines that the difference between the current time and the last visit time is less than the size of the sampling period in step 402 and the Yes branch is taken, the transmission count for the last sampling period is not equivalent to zero in step 404 and the No branch is taken, or the state in the one subscriber context does not indicate an active state in step 406 and the No branch is taken, the traffic management computing device 12 proceeds to step 410. In step 410, the traffic management computing device 12 sets the last visit time of the one subscriber context to the current time. Subsequent to setting the last visit time to the current time, the traffic management computing device 12 proceeds back to step 400 and retrieves another one of the subscriber contexts from the subscriber context table 34, as described and illustrated earlier.
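The sweep of FIG. 4 might be sketched as follows in Python; the context-dictionary fields mirror the subscriber context table 34 as described above, and iterating the whole table in one pass is a simplification of retrieving one context at a time:

import time

def sweep_contexts(subscriber_context_table, counters, sampling_period_s):
    now = time.monotonic()
    for ctx in subscriber_context_table.values():
        # Step 402: only evaluate contexts not yet visited this period.
        if now - ctx["last_visit"] >= sampling_period_s:
            # Steps 404-408: deactivate subscribers idle in the last period.
            if ctx["tx_count_last"] == 0 and ctx["state"] == "active":
                ctx["state"] = "inactive"
                counters["active_subscribers"] -= 1
        ctx["last_visit"] = now  # step 410, performed on every visit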


Referring more specifically to FIG. 5, an exemplary method for processing dropped packets, such as those packets determined to be dropped or queued in step 316 for example, is illustrated. In this example, the traffic management computing device 12 is configured to operate as a full proxy terminating TCP connections between the client computing devices 14(1)-14(n) and server computing devices 16(1)-16(n), although other operational configurations for the traffic management computing device 12 can also be used. In step 500 in this example, the traffic management computing device 12 creates a data packet for transmission. The packet can be created by a TCP stack associated with a subscriber, for example.


In step 502, the traffic management computing device 12 submits the packet to the IP layer for processing. The packet can be submitted to the IP layer by the TCP stack, for example. In step 504, the traffic management computing device 12 receives the packet, such as at the bandwidth controller 36 and from the TCP stack, for example. Accordingly, step 504 in FIG. 5 corresponds with step 304 in FIG. 3 in this example.


In step 506, the traffic management computing device 12 determines whether the bandwidth controller 36 decided to drop the packet. The packet can be dropped as described and illustrated earlier with reference to step 316 of FIG. 3, such as when a length of the packet is not less than a size corresponding to an available amount of tokens allocated to the subscriber, or based on a result of an implemented early drop policy, for example, although the packet can also be dropped for other reasons. If the traffic management computing device 12 determines that the packet was not determined to be dropped by the bandwidth controller 36, then the No branch is taken to step 508. In step 508, the packet is transmitted as described and illustrated earlier with reference to step 318 of FIG. 3.


Referring back to step 506, if the traffic management computing device 12 determines that the packet was determined to be dropped by the bandwidth controller 36, then the Yes branch is taken to step 510. In step 510, the traffic management computing device 12 marks the packet as not transmitted and, optionally, sends the packet back to the TCP stack or places the packet in a queue associated with the TCP stack, for example. In one example, the packet can be marked based on an indication associated with the packet stored in a portion of the memory 24 utilized by the TCP stack to store and retrieve packets, for example. In another example, the bandwidth controller 36 sends a return code to the TCP stack from which the packet was received to indicate that the packet has been marked. Other manners of marking the packet can also be used.


In step 512, the traffic management computing device 12 sets a TCP flow timer in the memory 24 to schedule resubmission of the packet by the TCP stack to the bandwidth controller 36. Optionally, the TCP flow timer can be set by the bandwidth controller 36 based on the token recharge rate of the bandwidth controller 36 so that resubmission of the packet occurs in a next recharge cycle during which the subscriber associated with the TCP stack may have available tokens that can be used to transmit the packet.


Accordingly, in step 514, the traffic management computing device 12 determines whether the flow timer has expired. If the traffic management computing device 12 determines that the flow timer has not expired, then the No branch is taken back to step 514, and the traffic management computing device 12 effectively waits for the flow timer to expire. However, if the traffic management computing device 12 determines in step 514 that the flow timer has expired, then the Yes branch is taken back to step 504.


Accordingly, when the flow timer has expired, the traffic management computing device 12 notifies the TCP stack and the bandwidth controller 36 receives a second request to transmit the packet. In this example, the packet can then be processed as described and illustrated earlier with reference to steps 304-320 of FIG. 3 and the traffic management computing device 12 can again determine whether the packet was determined to be dropped by the bandwidth controller 36 in step 506 of FIG. 5.
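Steps 510 through 514 might be sketched in Python as follows; tcp_stack.requeue and bandwidth_controller.submit are hypothetical interfaces standing in for the TCP stack and bandwidth controller 36, and threading.Timer stands in for the TCP flow timer:

import threading

def handle_dropped_packet(packet, tcp_stack, bandwidth_controller,
                          recharge_interval_s):
    # Step 510: mark the packet and return it to the TCP stack's queue.
    packet.transmitted = False
    tcp_stack.requeue(packet)
    # Steps 512-514: a flow timer scheduled for the next recharge cycle
    # resubmits the packet to the bandwidth controller (back to step 504).
    threading.Timer(
        recharge_interval_s,
        lambda: bandwidth_controller.submit(packet),
    ).start()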


By this technology, bandwidth control, traffic rate control, and/or traffic rate shaping policies can be implemented with reduced overhead and increased scalability since each packet does not have to be enqueued and dequeued, and a queue does not have to be maintained for each subscriber. Instead, requests to transmit packets of a specified size can be serviced with tokens, and an allocated quota per subscriber per recharge cycle can be used to limit the bandwidth utilized in a time period. Advantageously, an early drop policy can be implemented to mitigate the reduced throughput resulting from tail dropping packets for TCP connections. Additionally, packets determined to be dropped can be more effectively processed and resubmitted based on expiration of a flow timer, thereby reducing the time required to transmit, to a destination, a packet that would otherwise have been dropped.


Having thus described the basic concept of the invention, it will be rather apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only, and is not limiting. Various alterations, improvements, and modifications will occur and are intended to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested hereby, and are within the spirit and scope of the invention. Additionally, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefore, is not intended to limit the claimed processes to any order except as may be specified in the claims. Accordingly, the invention is limited only by the following claims and equivalents thereto.

Claims
  • 1. A method for traffic rate control implemented by a network traffic management system comprising one or more network traffic management devices, server devices, or client devices, the method comprising: allocating a proportional subset of an amount of tokens to a plurality of subscribers based on an estimated number of the plurality of subscribers that will be active in a next sampling period; receiving a request to transmit a first packet from a subscriber of the plurality of subscribers; querying a database to retrieve a state value associated with the subscriber; determining when the subscriber is in an active state based on analyzing the retrieved state value; determining an expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens, when the determination indicates that the subscriber is in the active state; comparing the determined expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens with a current time; determining the expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens has not expired when the comparison indicates that the current time is less than an expiration time; determining when the first packet is to be transmitted based on one or more token policies upon determining that the expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens has not expired; queuing the first packet in a first queue when the determining indicates that the first packet cannot be transmitted based on the one or more token policies; retrieving one of a plurality of subscriber contexts associated with the subscriber of the plurality of subscribers; determining when a difference between a current time and a last visit time of the one subscriber context is less than a size of a sampling period, a transmission count of the one subscriber context is equal to zero for a last sampling period, and a state of the one subscriber context indicates an active state; setting a last visit time of the one subscriber context to the current time; and repeating the retrieving, determining, and setting for each other of the plurality of subscriber contexts.
  • 2. The method of claim 1, further comprising: transmitting the first packet, without queueing the first packet, when the determining indicates that the first packet is to be transmitted based on the one or more token policies, wherein the one or more token policies comprises borrowing tokens from the next sampling period.
  • 3. The method of claim 1, further comprising: determining when a flow time associated with the first packet has expired; scheduling a resubmission of the queued first packet when the determination indicates that the flow time has expired; receiving a request to transmit a second packet from one currently inactive subscriber of the plurality of subscribers; allocating a plurality of tokens equivalent to the proportional subset of the amount of tokens to the one currently inactive subscriber; transmitting the second packet; incrementing a transmission count for a current sampling period for the one currently inactive subscriber; debiting the proportional subset of tokens allocated to the one currently inactive subscriber based on a length of the second packet; and modifying a subscriber context corresponding to the one currently inactive subscriber to indicate an active state.
  • 4. The method of claim 1, further comprising: determining when there are one or more outstanding packets; implementing the one or more token policies to determine when one or more of the outstanding packets are able to be transmitted when the determining indicates that there are the one or more outstanding packets; and transmitting the one or more of the outstanding packets and repeating the allocating, wherein the amount of tokens is reduced based on a size of the one or more of the outstanding packets, when the determining indicates that the one or more of the outstanding packets are able to be transmitted.
  • 5. The method of claim 1, further comprising: implementing an early drop policy to determine when the first packet is able to be transmitted when a length of the first packet is determined to be less than a size corresponding to the allocated proportional subset of the amount of tokens allocated to the one of the subscriber in the active state; only transmitting the first packet when the determining indicates that the first packet is able to be transmitted; and marking the first packet as not transmitted, setting a TCP flow timer, determining when the TCP flow timer has expired, and receiving another request to transmit the first packet when the determining indicates that the TCP flow timer has expired, when the determining indicates that the first packet is able to be transmitted based on the implemented early drop policy or when the determining indicates that the length of the first packet is not less than a size corresponding to the allocated proportional subset of the amount of tokens allocated to the one of the subscriber in the active state.
  • 6. A traffic management computing device, comprising memory comprising programmed instructions stored thereon and one or more processors configured to be capable of executing the stored programmed instructions to: allocate a proportional subset of an amount of tokens to a plurality of subscribers based on an estimated number of the plurality of subscribers that will be active in a next sampling period; receive a request to transmit a first packet from a subscriber of the plurality of subscribers; query a database to retrieve a state value associated with the subscriber; determine when the subscriber is in an active state based on analyzing the retrieved state value; determine an expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens, when the determination indicates that the subscriber is in the active state; compare the determined expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens with a current time; determine the expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens has not expired, when the comparison indicates that the current time is less than an expiration time; determine when the first packet is to be transmitted based on one or more token policies upon determining that the expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens has not expired; queue the first packet in a first queue when the determining indicates that the first packet cannot be transmitted based on the one or more token policies; retrieve one of a plurality of subscriber contexts associated with the subscriber of the plurality of subscribers; determine when a difference between a current time and a last visit time of the one subscriber context is less than a size of a sampling period, a transmission count of the one subscriber context is equal to zero for a last sampling period, and a state of the one subscriber context indicates an active state; set a last visit time of the one subscriber context to the current time; and repeat the retrieving, determining, and setting for each other of the plurality of subscriber contexts.
  • 7. The traffic management computing device of claim 6, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to: transmit the first packet, without queueing the first packet, when the determining indicates that the first packet is to be transmitted based on the one or more token policies, wherein the one or more token policies comprises borrowing tokens from the next sampling period.
  • 8. The traffic management computing device of claim 6, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to: determine when a flow time associated with the first packet has expired; schedule a resubmission of the queued first packet when the determination indicates that the flow time has expired; receive a request to transmit a second packet from one currently inactive subscriber of the plurality of subscribers; allocate a plurality of tokens equivalent to the proportional subset of the amount of tokens to the one currently active subscriber; transmit the second packet; increment a transmission count for a current sampling period for the one currently inactive subscriber; debit the proportional subset of tokens allocated to the one currently inactive subscriber based on a length of the second packet; and modify a subscriber context corresponding to the one currently inactive subscriber to indicate an active state.
  • 9. The traffic management computing device of claim 6, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to: determine when there are one or more outstanding packets; implement the one or more token policies to determine when one or more of the outstanding packets are able to be transmitted when the determining indicates that there are the one or more outstanding packets; and transmit the one or more of the outstanding packets and repeat the allocating, wherein the amount of tokens is reduced based on a size of the one or more of the outstanding packets, when the determining indicates that the one or more of the outstanding packets are able to be transmitted.
  • 10. The traffic management computing device of claim 6, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to: implement an early drop policy to determine when the first packet is able to be transmitted when a length of the first packet is determined to be less than a size corresponding to the allocated proportional subset of the amount of tokens allocated to the one of the subscriber in the active state; only transmit the first packet when the determining indicates that the first packet is able to be transmitted; and mark the first packet as not transmitted, setting a TCP flow timer, determining when the TCP flow timer has expired, and receiving another request to transmit the first packet when the determining indicates that the TCP flow timer has expired, when the determining indicates that the first packet is able to be transmitted based on the implemented early drop policy or when the determining indicates that the length of the first packet is not less than a size corresponding to the allocated proportional subset of the amount of tokens allocated to the one of the subscriber in the active state.
  • 11. A non-transitory computer readable medium having stored thereon instructions for traffic rate control comprising machine executable code which, when executed by one or more processors, causes the one or more processors to: allocate a proportional subset of an amount of tokens to a plurality of subscribers based on an estimated number of the plurality of subscribers that will be active in a next sampling period; receive a request to transmit a first packet from a subscriber of the plurality of subscribers; query a database to retrieve a state value associated with the subscriber; determine when the subscriber is in an active state based on analyzing the retrieved state value; determine an expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens, when the determination indicates that the subscriber is in the active state; compare the determined expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens with a current time; determine the expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens has not expired when the comparison indicates that the current time is less than an expiration time; determine when the first packet is to be transmitted based on one or more token policies upon determining that the expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens has not expired; queue the first packet in a first queue when the determining indicates that the first packet cannot be transmitted based on the one or more token policies; retrieve one of a plurality of subscriber contexts associated with the subscriber of the plurality of subscribers; determine when a difference between a current time and a last visit time of the one subscriber context is less than a size of a sampling period, a transmission count of the one subscriber context is equal to zero for a last sampling period, and a state of the one subscriber context indicates an active state; set a last visit time of the one subscriber context to the current time; and repeat the retrieving, determining, and setting for each other of the plurality of subscriber contexts.
  • 12. The non-transitory computer readable medium of claim 11, wherein the machine executable code when executed by the one or more processors further causes the one or more processors to: transmit the first packet, without queueing the first packet, when the determining indicates that the first packet is to be transmitted based on the one or more token policies, wherein the one or more token policies comprises borrowing tokens from the next sampling period.
  • 13. The non-transitory computer readable medium of claim 11, wherein the machine executable code when executed by the one or more processors further causes the one or more processors to: determine when a flow timer associated with the first packet has expired; schedule a resubmission of the queued first packet when the determination indicates that the flow timer has expired; receive a request to transmit a second packet from one currently inactive subscriber of the plurality of subscribers; allocate a plurality of tokens equivalent to the proportional subset of the amount of tokens to the one currently inactive subscriber; transmit the second packet; increment a transmission count for a current sampling period for the one currently inactive subscriber; debit the proportional subset of tokens allocated to the one currently inactive subscriber based on a length of the second packet; and modify a subscriber context corresponding to the one currently inactive subscriber to indicate an active state.
  • 14. The non-transitory computer readable medium of claim 11, wherein the machine executable code when executed by the one or more processors further causes the one or more processors to: determine when there are one or more outstanding packets; implement the one or more token policies to determine when one or more of the outstanding packets are able to be transmitted when the determining indicates that there are the one or more outstanding packets; and transmit the one or more of the outstanding packets and repeat the allocating, wherein the amount of tokens is reduced based on a size of the one or more of the outstanding packets, when the determining indicates that the one or more of the outstanding packets are able to be transmitted.
  • 15. The non-transitory computer readable medium of claim 11, wherein the machine executable code when executed by the one or more processors further causes the one or more processors to: implement an early drop policy to determine when the first packet is able to be transmitted when a length of the first packet is determined to be less than a size corresponding to the allocated proportional subset of the amount of tokens allocated to the one of the subscribers in the active state; only transmit the first packet when the determining indicates that the first packet is able to be transmitted; and mark the first packet as not transmitted, set a TCP flow timer, determine when the TCP flow timer has expired, and receive another request to transmit the first packet when the determining indicates that the TCP flow timer has expired, when the determining indicates that the first packet is not able to be transmitted based on the implemented early drop policy or when the determining indicates that the length of the first packet is not less than a size corresponding to the allocated proportional subset of the amount of tokens allocated to the one of the subscribers in the active state.
  • 16. A network traffic management system, comprising one or more network traffic management devices, server devices, or client devices, the network traffic management system comprising memory comprising programmed instructions stored thereon and one or more processors configured to be capable of executing the stored programmed instructions to: allocate a proportional subset of an amount of tokens to a plurality of subscribers based on an estimated number of the plurality of subscribers that will be active in a next sampling period; receive a request to transmit a first packet from a subscriber of the plurality of subscribers; query a database to retrieve a state value associated with the subscriber; determine when the subscriber is in an active state based on analyzing the retrieved state value; determine an expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens, when the determination indicates that the subscriber is in the active state; compare the determined expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens with a current time; determine the expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens has not expired when the comparison indicates that the current time is less than an expiration time; determine when the first packet is to be transmitted based on one or more token policies upon determining that the expiration time associated with each of the tokens in the allocated proportional subset of the amount of tokens has not expired; queue the first packet in a first queue when the determining indicates that the first packet cannot be transmitted based on the one or more token policies; retrieve one of a plurality of subscriber contexts associated with the subscriber of the plurality of subscribers; determine when a difference between a current time and a last visit time of the one subscriber context is less than a size of a sampling period, a transmission count of the one subscriber context is equal to zero for a last sampling period, and a state of the one subscriber context indicates an active state; set a last visit time of the one subscriber context to the current time; and repeat the retrieving, determining, and setting for each other of the plurality of subscriber contexts.
  • 17. The network traffic management system of claim 16, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to: transmit the first packet, without queueing the first packet, when the determining indicates that the first packet is to be transmitted based on the one or more token policies, wherein the one or more token policies comprises borrowing tokens from the next sampling period.
  • 18. The network traffic management system of claim 16, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to: determine when a flow timer associated with the first packet has expired; schedule a resubmission of the queued first packet when the determination indicates that the flow timer has expired; receive a request to transmit a second packet from one currently inactive subscriber of the plurality of subscribers; allocate a plurality of tokens equivalent to the proportional subset of the amount of tokens to the one currently inactive subscriber; transmit the second packet; increment a transmission count for a current sampling period for the one currently inactive subscriber; debit the proportional subset of tokens allocated to the one currently inactive subscriber based on a length of the second packet; and modify a subscriber context corresponding to the one currently inactive subscriber to indicate an active state.
  • 19. The network traffic management system of claim 16, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to: determine when there are one or more outstanding packets; implement the one or more token policies to determine when one or more of the outstanding packets are able to be transmitted when the determining indicates that there are the one or more outstanding packets; and transmit the one or more of the outstanding packets and repeat the allocating, wherein the amount of tokens is reduced based on a size of the one or more of the outstanding packets, when the determining indicates that the one or more of the outstanding packets are able to be transmitted.
  • 20. The network traffic management system of claim 16, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to: implement an early drop policy to determine when the first packet is able to be transmitted when a length of the first packet is determined to be less than a size corresponding to the allocated proportional subset of the amount of tokens allocated to the one of the subscribers in the active state; only transmit the first packet when the determining indicates that the first packet is able to be transmitted; and mark the first packet as not transmitted, set a TCP flow timer, determine when the TCP flow timer has expired, and receive another request to transmit the first packet when the determining indicates that the TCP flow timer has expired, when the determining indicates that the first packet is not able to be transmitted based on the implemented early drop policy or when the determining indicates that the length of the first packet is not less than a size corresponding to the allocated proportional subset of the amount of tokens allocated to the one of the subscribers in the active state.
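The claims above pack several distinct mechanisms into long clause chains. The short Python sketches that follow unpack them one at a time; every name, data structure, and numeric choice in them is an illustrative assumption, not language from the patent.

Claims 9, 14, and 19 recite draining outstanding (previously queued) packets whenever the token policies allow, reducing the token amount by the size of each packet sent. A minimal sketch of that drain loop, assuming a hypothetical per-subscriber `state` dict and a single fits-in-the-remaining-tokens policy:

```python
from collections import deque

def can_send(state, packet_len):
    # Stand-in for the "one or more token policies": permit transmission
    # only while enough tokens remain for the whole packet.
    return packet_len <= state["tokens"]

def drain_outstanding(state, outstanding, transmit):
    """Transmit outstanding packets while the token policies permit,
    reducing the amount of tokens by the size of each packet sent;
    whatever remains queued waits for the next allocation."""
    while outstanding and can_send(state, len(outstanding[0])):
        packet = outstanding.popleft()
        transmit(packet)
        state["tokens"] -= len(packet)

# Example: with state = {"tokens": 3000} and deque([b"x" * 1500] * 4),
# the loop sends two packets and leaves two queued.
```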
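Claims 10, 15, and 20 add an early drop policy, applied even when the packet would fit, plus a TCP flow timer for packets that are not sent. The claims do not spell the policy out, so the sketch below substitutes a RED-style probabilistic check as a plausible stand-in; `resubmit` is a hypothetical callback that re-enters the packet as a fresh transmit request, and `state` is assumed to carry the remaining `tokens` and the originally `allocated` amount:

```python
import random
import threading

def early_drop_allows(tokens_left, tokens_allocated, packet_len):
    """Assumed early drop policy (RED-like stand-in): even when the
    packet fits, admit it only with probability equal to the fraction
    of the allocation still available."""
    if packet_len > tokens_left:
        return False
    return random.random() < tokens_left / max(tokens_allocated, 1)

def submit(state, packet, transmit, resubmit, flow_timeout=0.2):
    """Only transmit when the early drop policy allows it; otherwise
    mark the packet as not transmitted and set a TCP flow timer whose
    expiration resubmits the request."""
    if early_drop_allows(state["tokens"], state["allocated"], len(packet)):
        transmit(packet)
        state["tokens"] -= len(packet)
    else:
        state.setdefault("not_transmitted", []).append(packet)
        threading.Timer(flow_timeout, resubmit, args=(packet,)).start()
```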
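Claims 11 and 16 combine three steps: a proportional allocation sized by the estimated number of subscribers active in the next sampling period, a validity check that compares the tokens' expiration time to the current time, and a sweep over subscriber contexts that tests for recently visited but silent subscribers and refreshes the last-visit time. A compressed sketch, with the context fields (`last_visit`, `tx_count`, `state`) assumed for illustration:

```python
import time

def allocate(total_tokens, estimated_active):
    """Proportional subset of the token amount per subscriber expected
    to be active in the next sampling period."""
    return total_tokens / max(estimated_active, 1)

def tokens_valid(expiration_time, now=None):
    """Allocated tokens are usable only while the current time is less
    than their expiration time."""
    now = time.monotonic() if now is None else now
    return now < expiration_time

def sweep_contexts(contexts, sampling_period, now=None):
    """Visit every subscriber context, test the claimed condition
    (recently visited, zero transmissions last period, still marked
    active), and set the last-visit time to the current time."""
    now = time.monotonic() if now is None else now
    for ctx in contexts:
        idle_but_active = (now - ctx["last_visit"] < sampling_period
                           and ctx["tx_count"] == 0
                           and ctx["state"] == "active")
        # The claims only recite the test; an implementation would act
        # on idle_but_active here, e.g. by reclaiming the allocation.
        ctx["last_visit"] = now
```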
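Claims 12 and 17 single out one token policy by name: borrowing tokens from the next sampling period so the packet can be transmitted immediately instead of queued. One illustrative way to express that is to let the current balance go negative up to a cap:

```python
def try_send_with_borrowing(state, packet, transmit, max_borrow):
    """Transmit the first packet without queueing it by borrowing up to
    max_borrow tokens against the next sampling period (names assumed)."""
    if len(packet) <= state["tokens"] + max_borrow:
        transmit(packet)
        state["tokens"] -= len(packet)  # balance may dip below zero
        return True
    return False  # caller falls back to queueing the packet

# At the next sampling period the fresh proportional subset is added to
# the possibly negative balance, which repays what was borrowed:
#     state["tokens"] += allocate(total_tokens, estimated_active)
```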
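Claims 13 and 18 describe what happens when a packet arrives from a currently inactive subscriber: it is granted a token allocation equivalent to the proportional subset, the packet is transmitted, the period's transmission count is incremented, the allocation is debited by the packet length, and the context is flipped to the active state. A direct sketch with the same assumed context fields as above:

```python
def handle_inactive_subscriber(ctx, packet, per_subscriber_tokens, transmit):
    """Bring a currently inactive subscriber into the active set on a
    new transmit request (claims 13 and 18, paraphrased in code)."""
    ctx["tokens"] = per_subscriber_tokens  # allocate the proportional subset
    transmit(packet)                       # transmit the second packet
    ctx["tx_count"] += 1                   # count it for this sampling period
    ctx["tokens"] -= len(packet)           # debit by the packet length
    ctx["state"] = "active"                # context now indicates active
```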
Parent Case Info

This application claims the benefit of U.S. Provisional Patent Application No. 61/905,011, filed on Nov. 15, 2013, which is hereby incorporated by reference in its entirety.

Provisional Applications (1)
Number Date Country
61/905,011 Nov. 15, 2013 US