Methods and systems for optimizing network traffic using preemptive acknowledgment signals

Information

  • Patent Grant
  • Patent Number
    8,806,053
  • Date Filed
    Tuesday, April 29, 2008
  • Date Issued
    Tuesday, August 12, 2014
Abstract
Methods and systems for efficient transmission of data between a requesting computer and a server. A request is received for server data from a requesting computer and the request is sent to the server over at least one network. The requested server data responsive to the request is forwarded on to the requesting computer. It is determined whether the requested server data has been previously forwarded either to the requesting computer or at least one other requesting computer. A preemptive acknowledgement signal is sent to the transmitting server substantially upon determining the requested server data has been previously forwarded for causing the transmitting server to cease transmitting any remaining, un-transmitted portions of the requested server data. These methods and systems increase the efficiency of transmission resources in a network.
Description
FIELD OF THE INVENTION

This invention generally relates to optimizing data transmission over a network and, more particularly, to methods and systems for optimizing the transmission of data to streamline network performance via preemptive acknowledgement signals.


BACKGROUND

With the widespread use of network-based applications and the need to transmit larger amounts of data in the form of video or audio files, concerns have been raised about straining network resources in the routine transfer of data between networked computers. Currently, such requests for data may be made to a web-based server via a standard HTTP request. The data is sent with certain information, such as checksums, to confirm the receipt of all of the data intended to be sent. Once the entirety of the requested data is received, the receiving computer sends an acknowledgment signal to the sending computer.


However, certain data, such as audio or video files that may be used repeatedly by certain applications, is sent repetitively. Although the receiving computer may already have received the requested data, it continues to request the same data when running applications that reuse the received data. The receiving computer thus receives the same data again, using network transmission resources unnecessarily. This large amount of unnecessary data transmission creates bottlenecks in network systems, slowing down service and responses to other server requests. For example, visiting a website may result in sending a flash file to the client's browser, which may be cached. After visiting another page, the user may return to the website and have to request the flash file again from the web server.


SUMMARY

According to one example, a method is disclosed for efficient transmission of data between a requesting computer and a server. A request is received for server data from a requesting computer and the request is sent to the server over at least one network. The requested server data responsive to the request is forwarded on to the requesting computer. It is determined whether the requested server data has been previously forwarded to the requesting computer or at least one other requestor. A preemptive acknowledgement signal is sent to the transmitting server substantially upon determining the requested server data has been previously forwarded for causing the transmitting server to cease transmitting any remaining, un-transmitted portions of the requested server data.


Another example disclosed is a machine readable medium having stored thereon instructions for increasing data flow in at least one network. The stored instructions comprise machine executable code, which when executed by at least one machine processor, causes the machine to accept a request for server data from a requesting computer over at least one network. The stored instructions further cause the machine to send the request for the server data to a server that stores the requested server data. The stored instructions further cause the machine to forward the requested server data responsive to the request to the requesting computer. The stored instructions further cause the machine to determine whether the requested server data has been previously forwarded to the requesting computer or at least one other requestor. The stored instructions further cause the machine to send a preemptive acknowledgement signal to the transmitting server substantially upon determining the requested server data has been previously forwarded for causing the transmitting server to cease transmitting any remaining, un-transmitted portions of the requested server data.


Another example disclosed is a system for efficient transmission of data. The system includes a requesting computer and a server coupled to the requesting computer via at least one network, the server storing server data. A network traffic optimization application module is interposed between the requesting computer and the server. The module receives a request for the server data from the requesting computer and sends the request to the server. The module forwards the requested server data responsive to the request on to the requesting computer. The module determines whether the requested server data has been previously forwarded either to the requesting computer or at least one other requesting computer. The module sends a preemptive acknowledgement signal to the transmitting server substantially upon determining the requested server data has been previously forwarded for causing the transmitting server to cease transmitting any remaining, un-transmitted portions of the requested server data.


Another example disclosed is a traffic management device for interposition between a requesting computer and a server. The traffic management device includes a first interface that receives a request for data from the requesting computer and sends the request to the server over a network. The device includes a second interface that obtains the requested data responsive to the request. A third interface forwards the requested data to the requesting computer. A controller determines whether the requested server data has been previously forwarded either to the requesting computer or at least one other requesting computer and sends a preemptive acknowledgement signal to the transmitting server substantially upon determining the requested server data has been previously forwarded for causing the transmitting server to cease transmitting any remaining, un-transmitted portions of the requested server data.


Additional aspects will be apparent to those of ordinary skill in the art in view of the detailed description of various embodiments, which is made with reference to the drawings, a brief description of which is provided below.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of a network system using one example of a network traffic optimization application;



FIG. 2 is a block diagram of the example traffic management device running the network traffic optimization application in the network system in FIG. 1;



FIG. 3 is a flow chart of methods for data optimization of the data requests performed by the example network system in FIG. 1; and



FIG. 4 is a block diagram of the data optimization process shown in FIGS. 2 and 3.





While these examples are susceptible of embodiment in many different forms, there is shown in the drawings and will herein be described in detail preferred examples with the understanding that the present disclosure is to be considered as an exemplification and is not intended to limit the broad aspect to the embodiments illustrated.


DETAILED DESCRIPTION

Currently, the ability to terminate redundant data transmission from sending servers is limited because the acknowledgment signals indicating that data is received cannot be sent by the receiving computer until the entirety of the requested data has been received. The result is that data is sent to the receiving computer that made the request, despite the receiving computer already having the data, resulting in wasted transmission resources for sending data that is already available to the receiving computer.



FIG. 1 is a block diagram of an example system 100 that may allow for efficient data transmission using deterministic acknowledgment signals from client computers in a network that employs a proxy device. The system 100 may provide responses and requests according to the HTTP based application protocol in this example, but the principles discussed herein are not limited to this example and can include other application and network protocols. The system 100 may include a series of one or more private client computers 102, one or more private servers 104, and at least one traffic management device 106. In this example, traffic management device 106 is logically interposed between the private client computers 102 and the private servers 104 in the private network 108, although other arrangements may be used in other network environments. The private client computers 102, in this example, may run web browsers, which may provide user interfaces allowing private computer 102 users to make data requests over private network 108 to different web server based applications operating on private servers 104, for instance, although the data requests can instead or in addition be made by public client computer 102′ users in the public network 108′ to private servers 104 in the private network 108, and/or by private client computer 102 users in the private network 108 to public servers 104′ in the public network 108′.


In this example, the private network 108 is a local area network (LAN) environment employing any suitable interface mechanisms and communications technologies including, for example, telecommunications in any suitable form (e.g., voice, modem, and the like), Public Switched Telephone Networks (PSTNs), Ethernet based Packet Data Networks (PDNs), combinations thereof, and the like. Moreover, private network 108 may be made up of one or more interconnected LANs located in substantially the same geographic location or geographically separated, although network 108 may include other types of networks arranged in other configurations. Moreover, private network 108 may include one or more additional intermediary and/or network infrastructure devices in communication with each other via one or more wired and/or wireless network links, such as switches, routers, modems, or gateways (not shown), and the like, as well as other types of network devices including network storage devices. Further, system 100 includes public network 108′, which connects public client computers 102′ and public servers 104′ to the private network 108 via the at least one traffic management device 106. Moreover, public network 108′ may include any publicly accessible network environment, such as the Internet, which includes network components, such as the public servers 104′, that are not directly managed or under direct control by traffic management device 106, yet whose operation may still be influenced in unique, novel and unexpected ways in response to TCP/IP protocol directives strategically determined and sent from the traffic management device 106 to make the private network 108, and perhaps the public network 108′, operate more efficiently, as will be described in greater detail herein.
It should be noted, however, that the ensuing descriptions of the various functionalities relating to the private clients 102 and private server 104 are applicable to the public clients 102′ and the public servers 104′, respectively, and thus the remaining description will simply refer to either one as clients 102 and/or servers 104 unless noted otherwise.


In this example, the server 104 may run a web page server application, such as a streaming media video application. It is to be understood that the server 104 may be hardware or software or may represent a system with multiple servers which may include internal networks. In this example the server 104 may be any version of Microsoft® IIS servers or Apache® servers, although other types of servers may be used. Further, additional servers may be coupled to the system 100 and many different types of applications may be available on servers coupled to the system 100.


The traffic management device 106 may be interposed between the server 104 and the client computers 102 as shown in FIG. 1. The traffic management device 106 may provide connections established between the servers 104 and the requesting client computers 102. From the perspective of the client computers 102, they have directly established a connection in the usual way to the appropriate server 104 and respective server applications. The existence of the proxy connection is entirely transparent to the requesting client computer 102. The implementation of such a proxy may be performed with known address spoofing techniques to assure transparency, although other methods could be used. The traffic management device 106 may provide high availability of IP applications/services running across multiple servers. The traffic management device 106 may distribute requests from the client computers 102 according to business policies, data center conditions and network conditions to ensure availability of the applications running on the server 104. An example of the traffic management device 106 is the BIG-IP® product available from F5 Networks, Inc. of Seattle, Wash., although other traffic management devices could be used.


As will be detailed below, the traffic management device 106 may receive one or more data requests for the server applications running on the server 104 from the client computers 102. The requests may include header data that provide certain identification and routing data from the requesting client computer 102. The traffic management device 106 may route the requests to the server 104 for the requested application. The traffic management device 106 may receive the acknowledgment that the requests have been fulfilled from the requesting client computer 102. The appropriate server application on server 104 may then terminate or cease the transmission of data.


The efficient data transmission of the system 100 may be based on a protocol that allows deterministic acknowledgement signals, such as the Transmission Control Protocol (TCP) of the Internet. Deterministic acknowledgment signals acknowledge the data on receipt and may be sent before the data transmission has completed. The traffic management device 106 may send a preemptive acknowledgment to the server 104 as soon as it is determined that a block of data or the entire requested data file is already present either at the traffic management device 106 or at the requesting client computer 102. The traffic management device 106 may then send an acknowledgment signal, initiated by itself or the client computer 102, to the server 104 that stops the sending of server data, thus freeing up bandwidth across the system 100.



FIG. 2 is a block diagram showing the efficient transmission of server data that may use an acknowledge command from the traffic management device 106 or the requesting client computer 102. The traffic management device 106 may include a controller 200 that runs a network traffic optimization application 202. It is to be understood that the network traffic optimization application 202 may be a module of another application such as a local traffic manager application that runs on the traffic management device 106. The network traffic optimization application 202 may also run on the requesting client computer 102 in FIG. 1. The network traffic optimization application 202 may have access to a buffer memory 204. The buffer memory 204 may be used as a cache to hold various data that has been received for transmission to either the server 104 or the requesting client computers 102. The traffic management device 106 may have a client interface 206 that may send responses to and receive requests from client computers such as the client computers 102 through the network 108 in FIG. 1. In this example, the client interface 206 may be an Ethernet connection. The traffic management device 106 may have a server interface 208 that may send requests to and receive responses from connected servers such as the server 104 in FIG. 1.


The traffic management device 106 and the receiving client computer 102 may become aware that requested data is already present in internal memory such as the buffer memory 204 in the traffic management device 106. In FIG. 2, a request may be made via the client computer 102 for a large data file 220 from the server 104. For example, the large data file 220 may be a video file. In response to the request, the server 104 may begin sending the data file 220. In FIG. 2, the data file 220 may be broken into several data blocks 220A-220F which reside in a stack 222 that acts as a send buffer. The data blocks 220A-220F thus may be stored in the stack 222 during the transmission of the data file 220 to the requesting client computer 102. The server 104 may begin to send the data file 220 by sending the first data block 220A from the stack 222 as shown in FIG. 2. Once the server 104 receives an acknowledgment from the client computer 102 or the traffic management device 106 indicating that the entire data file 220 has been received, the server 104 may discard the remaining portion of the data file by pushing it out of the internal buffers of the stack 222. The received acknowledgment signal may cause the transmission of the data file to the client computer 102 to end, thus freeing up resources such as the stack 222 and bandwidth from the server 104.
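The server-side behavior described above can be illustrated with a minimal Python sketch. This is not from the patent; the class and block names are hypothetical, and the model is simplified to show only the send buffer being drained and then purged when an acknowledgment arrives mid-transfer.

```python
from collections import deque

class ServerSendBuffer:
    """Simplified model of the server-side stack (222) holding data
    blocks queued for transmission. On receiving an acknowledgment,
    any un-transmitted blocks are purged, freeing the buffer."""

    def __init__(self, blocks):
        self.stack = deque(blocks)   # blocks awaiting transmission
        self.sent = []               # blocks already transmitted

    def send_next(self):
        # Transmit the next queued block, if any remain.
        if self.stack:
            block = self.stack.popleft()
            self.sent.append(block)
            return block
        return None

    def on_ack(self):
        # An acknowledgment (preemptive or normal) ends the transfer:
        # discard all remaining blocks, freeing the stack for other tasks.
        self.stack.clear()

buf = ServerSendBuffer(["220A", "220B", "220C", "220D", "220E", "220F"])
buf.send_next()          # server sends 220A
buf.send_next()          # server sends 220B
buf.on_ack()             # preemptive ACK arrives mid-transfer
assert buf.sent == ["220A", "220B"]
assert len(buf.stack) == 0   # remaining blocks 220C-220F were purged
```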


In this example, the traffic management device 106 may become aware of the contents of the entire data file 220 by reading an identifying key from the first data block 220A. Such an identifying key may be included in information relating to the requested data contained in the header in the first data block. In this example, the header may include a hash value derived from the data block. The traffic management device 106 may then perform a “pessimistic” lookup of the contents of the buffer 204 while receiving an additional data block 220B. The “pessimistic” lookup is defined as a lookup that is performed without interrupting the underlying operation. Thus, in this example, the lookup is performed while continuing the receiving of the requested data. This comparison may be made using the hash value to determine whether the requested data is already stored. If a match is found, the traffic management device 106 may stop the transmission of the remainder of the data file 220 from the server 104 by sending an acknowledgment signal to the server 104. In this example, the traffic management device 106 may store the data blocks 220A-220F of the data file 220 from a previous request in the buffer 204 and thus the traffic management device 106 may send an acknowledgement signal to the server 104 using the identification of the client computer 102. Since the data already exists, the remainder of the requested data (data blocks 220C-220F) may instead be sent by the traffic management device 106 from the stored data in the buffer 204 and therefore additional data blocks from the server 104 are unnecessary to fulfill the request. If a match were not found, there is no penalty for the subsequent trip time of the additional data (data blocks 220C-220F) from the server 104.
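The "pessimistic" lookup described above can be sketched in Python as follows. This is an illustration only: the patent does not mandate a particular hash function or cache structure, so the use of SHA-256 and a dictionary keyed by the first block's hash are assumptions made for the example.

```python
import hashlib

def first_block_key(block: bytes) -> str:
    # An identifying key of the kind that might be carried in the first
    # block's header (SHA-256 is a hypothetical choice; the patent only
    # calls for a hash value derived from the data block).
    return hashlib.sha256(block).hexdigest()

def pessimistic_lookup(cache: dict, key: str):
    """Look up the key in the buffer without interrupting the underlying
    operation: reception of further blocks continues in the meantime, so
    a miss costs nothing. Returns the cached blocks if present."""
    return cache.get(key)

# A previously forwarded file left its blocks in the buffer (204):
file_blocks = [b"block-A", b"block-B", b"block-C"]
cache = {first_block_key(file_blocks[0]): file_blocks}

# A new request arrives; the first block identifies the whole file.
key = first_block_key(b"block-A")
hit = pessimistic_lookup(cache, key)
assert hit is not None                      # match: send preemptive ACK
assert hit[1:] == [b"block-B", b"block-C"]  # serve the rest from cache
```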


An alternate process may be a network traffic optimization application installed on the client computer 102 in FIG. 1. In the case where the traffic management device 106 is not a part of the system 100, or where the data requested is not in the buffer 204 of the traffic management device 106, the client computer 102 may have access to a cache with the requested data. In such a case, the client computer 102 may send a deterministic acknowledgement signal to the server 104 to terminate the transmission of the data prior to the sending of all of the requested data.


Each of the client computers 102, server 104, and the traffic management device 106 may include a central processing unit (CPU), controller or processor, a memory, and an interface system which are coupled together by a bus or other link, although other numbers and types of each of the components and other configurations and locations for the components can be used. The processors in the client computers 102, server 104 and the traffic management device 106 may execute a program of stored instructions for one or more aspects of the methods and systems as described herein, including for increasing data transmission efficiency, although the processor could execute other types of programmed instructions. The memory may store these programmed instructions for one or more aspects of the methods and systems as described herein, including the method for increasing the transmission efficiency, although some or all of the programmed instructions could be stored and/or executed elsewhere. A variety of different types of memory storage devices, such as a random access memory (RAM) or a read only memory (ROM) in the system or a floppy disk, hard disk, CD ROM, DVD ROM, or other computer readable medium which is read from and/or written to by a magnetic, optical, or other reading and/or writing system that is coupled to the processor, may be used for the memory. The user input device may comprise a computer keyboard and a computer mouse, although other types and numbers of user input devices may be used. The display may comprise a computer display screen, such as a CRT or LCD screen by way of example only, although other types and numbers of displays could be used.


Although an example of the client computers 102, server 104 and traffic management device 106 are described and illustrated herein in connection with FIGS. 1 and 2, each of the computers of the system 100 could be implemented on any suitable computer system or computing device. It is to be understood that the example devices and systems of the system 100 are for exemplary purposes, as many variations of the specific hardware and software used to implement the system 100 are possible, as will be appreciated by those skilled in the relevant art(s).


Furthermore, each of the systems of the system 100 may be conveniently implemented using one or more general purpose computer systems, microprocessors, digital signal processors, micro-controllers, application specific integrated circuits (ASIC), programmable logic devices (PLD), field programmable logic devices (FPLD), field programmable gate arrays (FPGA) and the like, programmed according to the teachings as described and illustrated herein, as will be appreciated by those skilled in the computer, software and networking arts.


In addition, two or more computing systems or devices may be substituted for any one of the systems in the system 100. Accordingly, principles and advantages of distributed processing, such as redundancy, replication, and the like, also can be implemented, as desired, to increase the robustness and performance of the devices and systems of the system 100. The system 100 may also be implemented on a computer system or systems that extend across any network environment using any suitable interface mechanisms and communications technologies including, for example telecommunications in any suitable form (e.g., voice, modem, and the like), Public Switched Telephone Network (PSTNs), Packet Data Networks (PDNs), the Internet, intranets, a combination thereof, and the like.


The operation of the example network traffic optimization application 202, shown in FIG. 2, which may be run on the traffic management device 106, will now be described with reference back to FIG. 1 in conjunction with the flow diagram shown in FIG. 3. The flow diagram in FIG. 3 is representative of example machine readable instructions for implementing the data transmission process. In this example, the machine readable instructions comprise an algorithm for execution by: (a) a processor, (b) a controller, and/or (c) one or more other suitable processing device(s). The algorithm may be embodied in software stored on tangible media such as, for example, a flash memory, a CD-ROM, a floppy disk, a hard drive, a digital video (versatile) disk (DVD), or other memory devices, but persons of ordinary skill in the art will readily appreciate that the entire algorithm and/or parts thereof could alternatively be executed by a device other than a processor and/or embodied in firmware or dedicated hardware in a well known manner (e.g., it may be implemented by an application specific integrated circuit (ASIC), a programmable logic device (PLD), a field programmable logic device (FPLD), a field programmable gate array (FPGA), discrete logic, etc.). For example, any or all of the components of the server 104 could be implemented by software, hardware, and/or firmware. Also, some or all of the machine readable instructions represented by the flowchart of FIG. 3 may be implemented manually. Further, although the example algorithm is described with reference to the flowchart illustrated in FIG. 3, persons of ordinary skill in the art will readily appreciate that many other methods of implementing the example machine readable instructions may alternatively be used. For example, the order of execution of the blocks may be changed, and/or some of the blocks described may be changed, eliminated, or combined.


Returning to FIG. 3, a request for server data may be initially received from the client computer 102 by the client interface 206 of the traffic management device 106 via the network 108 in FIG. 1 (300). The network traffic optimization application 202 of the traffic management device 106 may pass along the request for data to the server 104 that locates the requested data (302). The server 104 may place the server data requested in the stack 222 and may begin to send the first responsive block of data (304). The traffic management device 106 may read the header of the first block of data and may begin performing lookups to determine whether the data already is stored (indicating the server data has been previously transmitted to the requesting computer), while sending the first block of data to the client computer 102 (306). The traffic management device 106 may take the information relating to the requested data and, from the lookups, determine whether the requested data blocks exist in the buffer 204 (308). If the requested data blocks do not exist, the traffic management device 106 may continue to allow the transmission of the remainder of the data blocks from the server 104 (310). The receiving client computer 102 may determine whether the next data block received is the last data block (312). If the data block is the last block, the client computer 102 may send an ACK signal to the server 104 (314). The server 104 may empty the stack 222 and cease sending the data blocks on receiving the ACK signal (316).
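The decision flow of FIG. 3 can be condensed into a short Python sketch. The function, the key-derivation callback, and the return labels are all hypothetical names introduced for illustration; the numbered comments map loosely onto the flow-chart blocks above.

```python
def handle_request(cache, server_blocks, key_of):
    """Hypothetical sketch of the FIG. 3 flow: forward the first block,
    check the buffer, and either ACK preemptively and serve from the
    cache or let the server finish the transfer."""
    forwarded = []
    first = server_blocks[0]
    forwarded.append(first)                 # (304)/(306): forward first block
    key = key_of(first)
    if key in cache:                        # (308): lookup hit
        # (318)/(320): preemptive ACK; server stops, rest served from cache
        forwarded.extend(cache[key][1:])
        return forwarded, "preemptive_ack"
    # (310)-(316): cache miss; server transmits the remaining blocks
    forwarded.extend(server_blocks[1:])
    cache[key] = list(server_blocks)        # store for future requests
    return forwarded, "normal_ack"

cache = {}
key_of = lambda b: b[:1]                    # hypothetical key: first byte
blocks = [b"A1", b"B2", b"C3"]

out, sig = handle_request(cache, blocks, key_of)
assert sig == "normal_ack" and out == blocks      # first request: full transfer

out, sig = handle_request(cache, blocks, key_of)
assert sig == "preemptive_ack" and out == blocks  # repeat: served from cache
```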


If the traffic management device 106 determines that the requested data are already stored in the buffer 204 (308), the traffic management device 106 may send a preemptive ACK signal to the server 104 (318). The server 104, on receiving the preemptive ACK signal, may cease transmitting any remaining data blocks to the traffic management device 106 (320). The server 104 may then purge the data blocks in the stack 222, thus freeing up the stack 222 for use with other server tasks. Thus, if the requested data exists in the traffic management device 106, greater data transmission efficiency is achieved by sending a deterministic acknowledgment signal to the server 104. For example, the server 104 may proceed to transmit other responsive data after terminating the transmission of the responsive data file. For larger data files, the ACK signal may also include information identifying the data blocks in the requesting computer that match the requested data. In such a case, the server 104 could be instructed to transmit only the data blocks that are part of the requested data file but are not available to the requesting computer.
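The larger-file variant, where the acknowledgment carries information about blocks the requester already holds, reduces to a set difference. A minimal sketch, with hypothetical names, assuming blocks are identified by string IDs:

```python
def missing_blocks(requested_ids, held_ids):
    """Sketch of the partial-transfer variant: the ACK identifies the
    block IDs the requesting side already holds, so the server need
    transmit only the blocks of the requested file it lacks."""
    held = set(held_ids)
    # Preserve the file's block order while skipping held blocks.
    return [b for b in requested_ids if b not in held]

# Requester already holds 220A and 220C; server sends only the rest.
todo = missing_blocks(["220A", "220B", "220C", "220D"], ["220A", "220C"])
assert todo == ["220B", "220D"]
```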



FIG. 4 is a block diagram of a sequence showing a comparison between the use of a deterministic acknowledgement with already cached data and where the data is not cached on the receiving computer. In the example shown in FIG. 4, the buffer 222 contains data blocks 220A-220F described above in conjunction with FIG. 2. The receiving computer (traffic management device 106) may receive the data block 220A. The receiving computer may recognize the data block 220A as part of a stored data file. The receiving computer may then send a deterministic acknowledgment signal back to the server 104 and the server 104 may stop sending the data. In this example, the acknowledgment signal may be received while the server 104 is sending the data block 220C. Thus, the server 104 may devote the stack resources ordinarily devoted to sending data blocks 220D-220F to other tasks.


It is to be understood that the receiving computer that already stores the requested data in the above example is the traffic management device 106 but the receiving computer that already stores the requested data may be the actual client computer 102 in FIG. 1. Thus, the processes described above may also be applied to systems that do not have a traffic management device. Further, although the examples discussed relate to the Internet Protocol, the processes discussed may be implemented with any protocol that offers deterministic acknowledgment signals that allow acknowledgment of data upon receipt and offering the opportunity of an application to identify data prior to its full delivery.


Having thus described the basic concepts, it will be rather apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only, and is not limiting. Various alterations, improvements, and modifications will occur and are intended to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested hereby, and are within the spirit and scope of the examples. Additionally, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefore, is not intended to limit the claimed processes to any order except as may be specified in the claims. Accordingly, the invention is limited only by the following claims and equivalents thereto.

Claims
  • 1. A method for efficient transmission of data between a requesting computer and a server, the method comprising: receiving, at a network traffic management device, a request for server data comprising a plurality of data packets from a requesting computer and sending the request to the server over at least one network; receiving a first portion of the server data from the server at the network traffic management device; determining, at the network traffic management device, whether a remaining portion of the server data, not yet received at the network traffic management device in response to the request, is stored at the network traffic management device based on the received first portion of the server data; sending a preemptive acknowledgement signal from the network traffic management device to the server, subsequent to receiving the first portion of the server data, to cause the server to terminate transmission of the remaining portion of the server data, when the remaining portion of the server data is determined to be stored at the network traffic management device; and sending the remaining portion of the server data stored at the network traffic management device to the requesting computer.
  • 2. The method of claim 1, wherein the determining whether the remaining portion of the server data is stored includes reading a header of one or more of the plurality of data packets of the received first portion of the server data for information associated with the server data.
  • 3. The method of claim 2, wherein the header includes a unique value associated with the server data and wherein the unique value is used in determining whether the remaining portion of the server data is stored.
  • 4. The method of claim 1, wherein the preemptive acknowledgment signal includes information relating to the data packets from the requesting computer matching the requested server data.
  • 5. A non-transitory machine readable medium having stored thereon instructions for increasing data flow in a network, the stored instructions comprising machine executable code which, when executed by at least one processor in a network traffic management device, causes the processor to perform steps comprising: receiving a request for server data from the requesting computer and sending the request to the server over the network; receiving a first portion of the server data from the server; determining whether a remaining portion of the server data, not yet received at the network traffic management device in response to the request, is stored at the network traffic management device based on the received first portion of the server data; sending a preemptive acknowledgement signal from the network traffic management device to the server, subsequent to receiving the first portion of the server data, to cause the server to terminate transmission of the remaining portion of the server data, when the remaining portion of the server data is determined to be stored at the network traffic management device; and sending the remaining portion of the server data stored at the network traffic management device to the requesting computer.
  • 6. The machine readable medium of claim 5, wherein the determining whether the remaining portion of the server data is stored includes reading a header of one or more of the plurality of data packets of the received first portion of the server data for information associated with the server data.
  • 7. The machine readable medium of claim 6, wherein the header includes a unique value associated with the server data and wherein the unique value is used in determining whether the remaining portion of the server data is stored.
  • 8. The machine readable medium of claim 5, wherein the preemptive acknowledgment signal includes information relating to the data in the requesting computer matching the requested server data.
  • 9. A network traffic management device comprising: a network interface configured to receive and transmit data packets between a requesting computer and a server device over a network; a memory having stored thereon code embodying machine executable programmable instructions for increasing data flow in the network; and a processor configured to execute the stored programming instructions in the memory to perform steps comprising: receiving a request for server data from the requesting computer and sending the request to the server over the network; receiving a first portion of the server data from the server; determining whether a remaining portion of the server data, not yet received at the network traffic management device in response to the request, is stored at the network traffic management device based on the received first portion of the server data; sending a preemptive acknowledgement signal from the network traffic management device to the server, subsequent to receiving the first portion of the server data, to cause the server to terminate transmission of the remaining portion of the server data, when the remaining portion of the server data is determined to be stored at the network traffic management device; and sending the remaining portion of the server data stored at the network traffic management device to the requesting computer.
  • 10. The device of claim 9, wherein the determining whether the remaining portion of the server data is stored includes reading a header of one or more of the plurality of data packets of the received first portion of the server data for information associated with the server data.
  • 11. The device of claim 10, wherein the header includes a unique value associated with the server data and wherein the unique value is used in determining whether the remaining portion of the server data is stored.
  • 12. The device of claim 9, wherein the preemptive acknowledgment signal includes information relating to the data in the requesting computer matching the requested server data.
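The claimed flow can be summarized in an illustrative sketch that is not part of the patent text. It models a network traffic management device that caches previously forwarded server data under a unique header value, and, when the first portion of a response matches a cached entry, sends a preemptive acknowledgement to the server and serves the remainder from cache. All class and method names (e.g., `TrafficManager`, `preemptive_ack`) are hypothetical and chosen only to mirror the claim language.

```python
# Illustrative, non-limiting model of the claimed flow. The "server" and
# "client" here are stubs; a real device would operate on TCP segments.

class StubServer:
    """Records preemptive acknowledgements instead of halting a real stream."""
    def __init__(self):
        self.acked = []

    def preemptive_ack(self, unique_value):
        # In the claimed method, this signal causes the server to terminate
        # transmission of the remaining, un-transmitted portion of the data.
        self.acked.append(unique_value)


class StubClient:
    """Receives the full payload from the traffic management device."""
    def __init__(self):
        self.received = b""

    def receive(self, payload):
        self.received = payload


class TrafficManager:
    """Caches previously forwarded server data keyed by a unique header value."""
    def __init__(self):
        self.cache = {}  # unique header value -> full payload forwarded earlier

    def handle_first_portion(self, unique_value, first_portion, server, client):
        cached = self.cache.get(unique_value)
        if cached is not None and cached.startswith(first_portion):
            # Remaining portion is already stored locally: acknowledge the
            # server preemptively and serve the full payload from cache.
            server.preemptive_ack(unique_value)
            client.receive(cached)
            return "served-from-cache"
        # Otherwise the device simply continues relaying server data.
        return "pass-through"
```

A usage example: after `TrafficManager.cache["abc123"] = b"HELLO WORLD"`, a later response whose first portion is `b"HELLO"` under the same unique value triggers the preemptive acknowledgement and the client receives the cached payload without the server retransmitting it.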
US Referenced Citations (211)
Number Name Date Kind
3950735 Patel Apr 1976 A
4644532 George et al. Feb 1987 A
4897781 Chang et al. Jan 1990 A
4965772 Daniel et al. Oct 1990 A
5023826 Patel Jun 1991 A
5053953 Patel Oct 1991 A
5299312 Rocco, Jr. Mar 1994 A
5327529 Fults et al. Jul 1994 A
5367635 Bauer et al. Nov 1994 A
5371852 Attanasio et al. Dec 1994 A
5406502 Haramaty et al. Apr 1995 A
5475857 Dally Dec 1995 A
5517617 Sathaye et al. May 1996 A
5519694 Brewer et al. May 1996 A
5519778 Leighton et al. May 1996 A
5521591 Arora et al. May 1996 A
5528701 Aref Jun 1996 A
5581764 Fitzgerald et al. Dec 1996 A
5596742 Agarwal et al. Jan 1997 A
5606665 Yang et al. Feb 1997 A
5611049 Pitts Mar 1997 A
5663018 Cummings et al. Sep 1997 A
5752023 Choucri et al. May 1998 A
5761484 Agarwal et al. Jun 1998 A
5768423 Aref et al. Jun 1998 A
5774660 Brendel et al. Jun 1998 A
5790554 Pitcher et al. Aug 1998 A
5802052 Venkataraman Sep 1998 A
5812550 Sohn et al. Sep 1998 A
5825772 Dobbins et al. Oct 1998 A
5875296 Shi et al. Feb 1999 A
5892914 Pitts Apr 1999 A
5892932 Kim Apr 1999 A
5919247 Van Hoff et al. Jul 1999 A
5936939 Des Jardins et al. Aug 1999 A
5941988 Bhagwat et al. Aug 1999 A
5946690 Pitts Aug 1999 A
5949885 Leighton Sep 1999 A
5951694 Choquier et al. Sep 1999 A
5959990 Frantz et al. Sep 1999 A
5974460 Maddalozzo, Jr. et al. Oct 1999 A
5983281 Ogle et al. Nov 1999 A
5988847 McLaughlin et al. Nov 1999 A
6006260 Barrick, Jr. et al. Dec 1999 A
6006264 Colby et al. Dec 1999 A
6026452 Pitts Feb 2000 A
6028857 Poor Feb 2000 A
6051169 Brown et al. Apr 2000 A
6078956 Bryant et al. Jun 2000 A
6085234 Pitts et al. Jul 2000 A
6092196 Reiche Jul 2000 A
6108703 Leighton et al. Aug 2000 A
6111876 Frantz et al. Aug 2000 A
6128279 O'Neil et al. Oct 2000 A
6128657 Okanoya et al. Oct 2000 A
6170022 Linville et al. Jan 2001 B1
6178423 Douceur et al. Jan 2001 B1
6182139 Brendel Jan 2001 B1
6192051 Lipman et al. Feb 2001 B1
6233612 Fruchtman et al. May 2001 B1
6246684 Chapman et al. Jun 2001 B1
6253226 Chidambaran et al. Jun 2001 B1
6253230 Couland et al. Jun 2001 B1
6263368 Martin Jul 2001 B1
6289012 Harrington et al. Sep 2001 B1
6298380 Coile et al. Oct 2001 B1
6327622 Jindal et al. Dec 2001 B1
6343324 Hubis et al. Jan 2002 B1
6347339 Morris et al. Feb 2002 B1
6360270 Cherkasova et al. Mar 2002 B1
6374300 Masters Apr 2002 B2
6396833 Zhang et al. May 2002 B1
6430562 Kardos et al. Aug 2002 B1
6434081 Johnson et al. Aug 2002 B1
6484261 Wiegel Nov 2002 B1
6490624 Sampson et al. Dec 2002 B1
6510135 Almulhem et al. Jan 2003 B1
6510458 Berstis et al. Jan 2003 B1
6519643 Foulkes et al. Feb 2003 B1
6601084 Bhaskaran et al. Jul 2003 B1
6636503 Shiran et al. Oct 2003 B1
6636894 Short et al. Oct 2003 B1
6650640 Muller et al. Nov 2003 B1
6650641 Albert et al. Nov 2003 B1
6654701 Hatley Nov 2003 B2
6683873 Kwok et al. Jan 2004 B1
6691165 Bruck et al. Feb 2004 B1
6708187 Shanumgam et al. Mar 2004 B1
6742045 Albert et al. May 2004 B1
6751663 Farrell et al. Jun 2004 B1
6754228 Ludwig Jun 2004 B1
6760775 Anerousis et al. Jul 2004 B1
6772219 Shobatake Aug 2004 B1
6779039 Bommareddy et al. Aug 2004 B1
6781986 Sabaa et al. Aug 2004 B1
6798777 Ferguson et al. Sep 2004 B1
6816901 Sitaraman et al. Nov 2004 B1
6829238 Tokuyo et al. Dec 2004 B2
6868082 Allen, Jr. et al. Mar 2005 B1
6876629 Beshai et al. Apr 2005 B2
6876654 Hegde Apr 2005 B1
6888836 Cherkasova May 2005 B1
6928082 Liu et al. Aug 2005 B2
6950434 Viswanath et al. Sep 2005 B1
6954780 Susai et al. Oct 2005 B2
6957272 Tallegas et al. Oct 2005 B2
6975592 Seddigh et al. Dec 2005 B1
6987763 Rochberger et al. Jan 2006 B2
7007092 Peiffer Feb 2006 B2
7113993 Cappiello et al. Sep 2006 B1
7139792 Mishra et al. Nov 2006 B1
7228422 Morioka et al. Jun 2007 B2
7283470 Sindhu et al. Oct 2007 B1
7287082 O'Toole, Jr. Oct 2007 B1
7308703 Wright et al. Dec 2007 B2
7321926 Zhang et al. Jan 2008 B1
7333999 Njemanze Feb 2008 B1
7343413 Gilde et al. Mar 2008 B2
7349391 Ben-Dor et al. Mar 2008 B2
7398552 Pardee et al. Jul 2008 B2
7454480 Labio et al. Nov 2008 B2
7490162 Masters Feb 2009 B1
7500269 Huotari et al. Mar 2009 B2
7526541 Roese et al. Apr 2009 B2
7558197 Sindhu et al. Jul 2009 B1
7580971 Gollapudi et al. Aug 2009 B1
7624424 Morita et al. Nov 2009 B2
7668166 Rekhter et al. Feb 2010 B1
7706261 Sun et al. Apr 2010 B2
7724657 Rao et al. May 2010 B2
7801978 Susai et al. Sep 2010 B1
7876677 Cheshire Jan 2011 B2
7908314 Yamaguchi et al. Mar 2011 B2
8130650 Allen, Jr. et al. Mar 2012 B2
8199757 Pani et al. Jun 2012 B2
8351333 Rao et al. Jan 2013 B2
8380854 Szabo Feb 2013 B2
8447871 Szabo May 2013 B1
20010023442 Masters Sep 2001 A1
20020059428 Susai et al. May 2002 A1
20020138615 Schmeling Sep 2002 A1
20020161913 Gonzalez et al. Oct 2002 A1
20020198993 Cudd et al. Dec 2002 A1
20030046291 Fascenda Mar 2003 A1
20030070069 Belapurkar et al. Apr 2003 A1
20030086415 Bernhard et al. May 2003 A1
20030108052 Inoue et al. Jun 2003 A1
20030145062 Sharma et al. Jul 2003 A1
20030145233 Poletto et al. Jul 2003 A1
20030225485 Fritz et al. Dec 2003 A1
20040003287 Zissimopoulos et al. Jan 2004 A1
20040103283 Hornak May 2004 A1
20040117493 Bazot et al. Jun 2004 A1
20040267920 Hydrie et al. Dec 2004 A1
20040268358 Darling et al. Dec 2004 A1
20050004887 Igakura et al. Jan 2005 A1
20050021736 Carusi et al. Jan 2005 A1
20050044213 Kobayashi et al. Feb 2005 A1
20050052440 Kim et al. Mar 2005 A1
20050055435 Gbadegesin et al. Mar 2005 A1
20050122977 Lieberman Jun 2005 A1
20050154837 Keohane et al. Jul 2005 A1
20050187866 Lee Aug 2005 A1
20050188220 Nilsson et al. Aug 2005 A1
20050262238 Reeves et al. Nov 2005 A1
20060031520 Bedekar et al. Feb 2006 A1
20060059267 Cugi et al. Mar 2006 A1
20060156416 Huotari et al. Jul 2006 A1
20060161577 Kulkarni et al. Jul 2006 A1
20060171365 Borella Aug 2006 A1
20060233106 Achlioptas et al. Oct 2006 A1
20060242300 Yumoto et al. Oct 2006 A1
20070016662 Desai et al. Jan 2007 A1
20070064661 Sood et al. Mar 2007 A1
20070083646 Miller et al. Apr 2007 A1
20070107048 Halls et al. May 2007 A1
20070118879 Yeun May 2007 A1
20070174491 Still et al. Jul 2007 A1
20070220598 Salowey et al. Sep 2007 A1
20070297551 Choi Dec 2007 A1
20080034136 Ulenas Feb 2008 A1
20080072303 Syed Mar 2008 A1
20080133518 Kapoor et al. Jun 2008 A1
20080134311 Medvinsky et al. Jun 2008 A1
20080148340 Powell et al. Jun 2008 A1
20080201599 Ferraiolo et al. Aug 2008 A1
20080256224 Kaji et al. Oct 2008 A1
20080301760 Lim Dec 2008 A1
20090028337 Balabine et al. Jan 2009 A1
20090049230 Pandya Feb 2009 A1
20090119504 van Os et al. May 2009 A1
20090125625 Shim et al. May 2009 A1
20090138749 Moll et al. May 2009 A1
20090141891 Boyen et al. Jun 2009 A1
20090228956 He et al. Sep 2009 A1
20090287935 Aull et al. Nov 2009 A1
20100023582 Pedersen et al. Jan 2010 A1
20100071048 Novak et al. Mar 2010 A1
20100122091 Huang et al. May 2010 A1
20100150154 Viger et al. Jun 2010 A1
20100242092 Harris et al. Sep 2010 A1
20100251330 Kroeselberg et al. Sep 2010 A1
20100325277 Muthiah et al. Dec 2010 A1
20110040889 Garrett et al. Feb 2011 A1
20110047620 Mahaffey et al. Feb 2011 A1
20110066718 Susai et al. Mar 2011 A1
20110173295 Bakke et al. Jul 2011 A1
20110273984 Hsu et al. Nov 2011 A1
20110282997 Prince et al. Nov 2011 A1
20110321122 Mwangi et al. Dec 2011 A1
20120066489 Ozaki et al. Mar 2012 A1
Foreign Referenced Citations (12)
Number Date Country
0744850 Nov 1996 EP
9114326 Sep 1991 WO
9505712 Feb 1995 WO
9905829 Feb 1997 WO
9709805 Mar 1997 WO
9745800 Dec 1997 WO
9906913 Feb 1999 WO
9910858 Mar 1999 WO
9939373 Aug 1999 WO
9964967 Dec 1999 WO
0004422 Jan 2000 WO
0004458 Jan 2000 WO
Non-Patent Literature Citations (14)
Entry
Crescendo Networks, “Application Layer Processing (ALP),” 2003-2009, pp. 168-186, Chapter 9, CN-5000E/5500E, Foxit Software Company.
“A Process for Selective Routing of Servlet Content to Transcoding Modules,” Research Disclosure 422124, Jun. 1999, pp. 889-890, IBM Corporation.
“Big-IP Controller with Exclusive OneConnect Content Switching Feature Provides a Breakthrough System for Maximizing Server and Network Performance,” F5 Networks, Inc. Press Release, May 8, 2001, 2 pages, Las Vegas, Nevada.
Fielding et al., “Hypertext Transfer Protocol—HTTP/1.1,” Network Working Group, RFC: 2068, Jan. 1997, pp. 1-162.
Fielding et al., “Hypertext Transfer Protocol—HTTP/1.1,” Network Working Group, RFC: 2616, Jun. 1999, pp. 1-176.
Floyd et al., “Random Early Detection Gateways for Congestion Avoidance,” Aug. 1993, pp. 1-22, IEEE/ACM Transactions on Networking, California.
Hochmuth, Phil, “F5, CacheFlow pump up content-delivery lines,” Network World Fusion, May 4, 2001, 1 page, Las Vegas, Nevada.
“Servlet/Applet/HTML Authentication Process With Single Sign-On,” Research Disclosure 429128, Jan. 2000, pp. 163-164, IBM Corporation.
“Traffic Surges; Surge Queue; Netscaler Defense,” 2005, PowerPoint Presentation, slides 1-12, Citrix Systems, Inc.
Macvittie, Lori, “Message-Based Load Balancing,” Technical Brief, Jan. 2010, pp. 1-9, F5 Networks, Inc.
F5 Networks Inc., “Configuration Guide for Local Traffic Management,” F5 Networks Inc., Jan. 2006, version 9.2.2, 406 pgs.
Abad, C., et al., “An Analysis on the Schemes for Detecting and Preventing ARP Cache Poisoning Attacks”, IEEE, Computer Society, 27th International Conference on Distributed Computing Systems Workshops (ICDCSW'07), 2007, pp. 1-8.
OWASP, “Testing for Cross site scripting”, OWASP Testing Guide v2, Table of Contents, Feb. 24, 2011, pp. 1-5, (www.owasp.org/index.php/Testing_for_Cross_site_scripting).
International Search Report for International Patent Application No. PCT/US2013/026615 (Jul. 4, 2013).