Selectively enabling packet concatenation based on a transaction boundary

Information

  • Patent Grant
  • Patent Number
    8,559,313
  • Date Filed
    Friday, September 9, 2011
  • Date Issued
    Tuesday, October 15, 2013
Abstract
A system, apparatus, and method are directed towards selectively combining data into a packet to modify a number of packets transmitted over a network based on a detection of a transaction boundary. If it is determined to concatenate the data, such concatenation may continue until an acknowledgement (ACK) is received, or a predetermined amount of data is concatenated in the packet, or a transaction boundary is detected. If at least one of these conditions is satisfied, concatenation may be inhibited, and the packet may be sent. Concatenation is then re-enabled. In one embodiment, Nagle's algorithm is used for concatenating data into a packet. In one embodiment, an ACK may be sent based on a write completion indicator included within a packet. Receipt of the ACK may disable concatenation.
Description
BACKGROUND OF THE INVENTION

The invention relates generally to communicating content over a network, and more particularly but not exclusively to selectively determining whether to concatenate data into a packet based on a transaction boundary.


Nagle's algorithm concatenates data into a packet to modify a number of packets transmitted over a network. Nagle's algorithm typically concatenates data into a packet until an acknowledgement (ACK) is received or a predetermined amount of data is concatenated in the packet. Subsequently, the packet is transmitted over the network. Nagle's algorithm, named after John Nagle, is described in Request for Comments (RFC) 896 (available at http://www.faqs.org/rfcs/rfc896.html).
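
For illustration only, the sender-side rule of Nagle's algorithm can be sketched as follows. The class name NagleSender, the transmit callable, and the example MSS value are assumptions of this sketch, not elements described by the patent.

```python
# A minimal sketch of the sender-side rule in Nagle's algorithm (RFC 896).
# All names here (NagleSender, transmit, mss, ...) are illustrative.

class NagleSender:
    def __init__(self, transmit, mss=1460):
        self.transmit = transmit      # callable that puts one segment on the wire
        self.mss = mss                # maximum segment size
        self.buffer = bytearray()     # small writes are concatenated here
        self.unacked = False          # True while a sent segment awaits its ACK

    def write(self, data: bytes):
        self.buffer.extend(data)
        # Send right away only if a full segment is ready or nothing is in flight;
        # otherwise keep concatenating until an ACK arrives.
        if len(self.buffer) >= self.mss or not self.unacked:
            self._flush()

    def on_ack(self):
        self.unacked = False
        if self.buffer:
            self._flush()             # release whatever accumulated while waiting

    def _flush(self):
        segment = bytes(self.buffer[:self.mss])
        del self.buffer[:len(segment)]
        self.unacked = True
        self.transmit(segment)

# Example: small writes are coalesced while an earlier segment is unacknowledged.
sender = NagleSender(transmit=lambda seg: print(f"sent {len(seg)} bytes"))
sender.write(b"GET /a HTTP/1.1\r\n")    # sent immediately (nothing in flight)
sender.write(b"Host: example.com\r\n")  # held: waiting for an ACK
sender.on_ack()                         # ACK arrives, held data is sent
```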


Some TCP stacks may also implement a Delayed ACK algorithm. The Delayed ACK algorithm is directed towards minimizing network traffic when many small packets may require many ACK responses. The Delayed ACK algorithm sends an ACK after every two packets have been received, or based on a timing event. In one embodiment, the Delayed ACK algorithm may send an ACK after 200 msec has elapsed, if it has not detected the receipt of a second packet. The Delayed ACK algorithm is described in RFC 1122 (available at http://www.faqs.org/rfcs/rfc1122.html).
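
The following sketch, provided for illustration, models a Delayed ACK receiver that acknowledges every second segment immediately and otherwise waits on a roughly 200 msec timer. The class name DelayedAckReceiver and the send_ack callable are assumptions of this sketch.

```python
# A minimal sketch of a Delayed ACK receiver (RFC 1122 style): acknowledge every
# second segment at once, otherwise after a ~200 msec timer. Illustrative only.
import threading

class DelayedAckReceiver:
    def __init__(self, send_ack, delay=0.2):
        self.send_ack = send_ack      # callable that emits an ACK
        self.delay = delay            # typical delayed-ACK timer (~200 msec)
        self.pending = 0              # segments received but not yet acknowledged
        self.timer = None

    def on_segment(self, segment: bytes):
        self.pending += 1
        if self.pending >= 2:
            self._ack_now()           # every second segment is ACKed immediately
        elif self.timer is None:
            # Otherwise wait for a second segment or for the timer to fire.
            self.timer = threading.Timer(self.delay, self._ack_now)
            self.timer.start()

    def _ack_now(self):
        if self.timer is not None:
            self.timer.cancel()
            self.timer = None
        self.pending = 0
        self.send_ack()
```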


While the use of Nagle's algorithm may improve behavior for a network, interactions between Nagle's algorithm and the Delayed ACK algorithm may cause an extra delay in a sending of a packet. Because Nagle's algorithm waits for an ACK before sending a packet, the operation of the Delayed ACK algorithm may cause an undesirable delay in packet transmission. For example, Nagle's algorithm may wait until a delayed ACK is received, before sending the packet, even though the transaction is complete. This delay may be particularly egregious for a transaction based protocol where multiple transactions may be sent in the same session. An example of such a protocol is persistent HTTP 1.1. This situation is documented in http://www.w3.org/Protocols/HTTP/Performance/Nagle.
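
For comparison, the conventional per-socket work-around is to disable Nagle's algorithm entirely with the TCP_NODELAY option, as in the sketch below; the approach described herein instead keeps concatenation and suspends it selectively at transaction boundaries. The host and request shown are placeholders.

```python
# Disabling Nagle's algorithm outright on a TCP socket: the blunt work-around that
# forfeits concatenation for the whole connection. Host and request are placeholders.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # small writes go out immediately
sock.connect(("www.example.com", 80))
sock.sendall(b"GET / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
```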


Therefore, there is a need in the industry to improve how network packets are managed. Thus, it is with respect to these considerations and others that the present invention has been made.





BRIEF DESCRIPTION OF THE DRAWINGS

Non-limiting and non-exhaustive embodiments of the invention are described with reference to the following drawings. In the drawings, like reference numerals refer to like parts throughout the various figures unless otherwise specified.


For a better understanding of the invention, reference will be made to the following Detailed Description of the Invention, which is to be read in association with the accompanying drawings, wherein:



FIG. 1, illustrates one embodiment of an environment for practicing the invention;



FIG. 2 illustrates one embodiment of a network device that may be included in a system implementing the invention;



FIG. 3 illustrates one embodiment of a signal flow diagram showing an effect of inhibiting a concatenation of data into a packet;



FIG. 4 illustrates one embodiment of a signal flow diagram showing an effect of immediately sending an ACK based on detection of a write completion;



FIG. 5 illustrates a logical flow diagram generally showing one embodiment of a process for selectively enabling network packet concatenation based on a detection of a transaction boundary; and



FIG. 6 illustrates a logical flow diagram generally showing one embodiment of a process for immediately sending an ACK based on detection of a write completion indicator, in accordance with the invention.





DETAILED DESCRIPTION OF THE INVENTION

The invention now will be described more fully hereinafter with reference to the accompanying drawings, which form a part hereof, and which show, by way of illustration, specific exemplary embodiments by which the invention may be practiced. This invention may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the invention to those skilled in the art. Among other things, the invention may be embodied as methods or devices. Accordingly, the invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. The following detailed description is, therefore, not to be taken in a limiting sense.


Throughout the specification and claims, the following terms take the meanings explicitly associated herein, unless the context clearly dictates otherwise. The phrase “in one embodiment” as used herein does not necessarily refer to the same embodiment, though it may. As used herein, the term “or” is an inclusive “or” operator, and is equivalent to the term “and/or,” unless the context clearly dictates otherwise. The term “based on” is not exclusive and allows for being based on additional factors not described, unless the context clearly dictates otherwise. In addition, throughout the specification, the meaning of “a,” “an,” and “the” include plural references. The meaning of “in” includes “in” and “on.”


As used herein, a “transaction” is an application defined logically related grouping of network records at the application layer (OSI Layer 7). For example, a logically related grouping of network records at the application layer may be defined by the HyperText Transfer Protocol (HTTP), HTTP 1.1, TELNET, File Transfer Protocol (FTP), Simple Mail Transfer Protocol (SMTP), or the like. The logically related grouping of network records is embodied by data transmitted at the transport layer (OSI Layer 4) and/or the network layer (OSI Layer 3). For example, the data transmitted at these layers may include Internet Protocol (IP) packets, Transmission Control Protocol (TCP) packets, User Datagram Protocol (UDP) packets, Real-time Transport Protocol (RTP) packets, or the like.


As used herein, a “transaction boundary” is an indication of a transition between one transaction and another transaction. As used herein, a “transaction aware device” (TAD) is any computing device that is arranged to use information associated with syntax and/or semantics of a layer 7 application protocol to detect a transaction.


As used herein, “packets” are the data transmitted at the network layer (OSI Layer 3) and/or the transport layer (OSI Layer 4) and the associated records realized at the application layer (OSI Layer 7).


Briefly stated, the invention is directed towards a system, apparatus, and method for selectively combining data into a packet to modify a number of packets transmitted over a network based on a detection of a transaction boundary. As such, awareness of the higher-level transaction boundaries (e.g. at an application layer (OSI Layer 7)) may be utilized to determine how data may be segmented at lower layers, such as at the transport layer or the network layer, and to provide a more optimal segmentation of data.


In one embodiment, a transaction boundary may be determined by detecting an indicator of an end of a transaction. To detect the end of the transaction, a length of a transaction included in a record and/or protocol header of a packet is received. A plurality of packets is also received. If a length of the plurality of packets equals the length of the transaction, then an indicator of the end of the transaction is provided. In another embodiment, the end of the transaction is based on an end of transaction record, such as an end of file (EOF) indicator within a packet. In one embodiment, an indicator of the end of the transaction may be received from another component, network device, application, or the like. For example, the other component, network device, application or the like may indicate the end of the transaction, a flush of a buffer of data to be sent, or the like, through an application programming interface (API), or the like.
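
As an illustration of the length-based detection described above, the following sketch compares accumulated payload bytes against a transaction length taken from a record and/or protocol header (an HTTP Content-Length is one such example). The class and variable names are assumptions of this sketch.

```python
# Sketch of one boundary-detection method: compare the bytes seen so far against a
# transaction length declared in a protocol/record header. Names are illustrative.

class LengthBoundaryDetector:
    def __init__(self, declared_length: int):
        self.declared_length = declared_length  # length taken from the record/protocol header
        self.received = 0

    def on_payload(self, payload: bytes) -> bool:
        """Return True when the accumulated payload reaches the declared length,
        i.e. an end-of-transaction indicator can be provided."""
        self.received += len(payload)
        return self.received >= self.declared_length

detector = LengthBoundaryDetector(declared_length=1000)
print(detector.on_payload(b"x" * 600))   # False: mid-transaction
print(detector.on_payload(b"x" * 400))   # True: transaction boundary detected
```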


In another embodiment, a transaction boundary may be determined by detecting an indicator of a beginning of a transaction. In one embodiment, the beginning of the transaction may be indicated by a flag within a packet. In one embodiment, an indicator of the beginning of the transaction may be received from another component, network device, application, or the like.


In one embodiment, if it is determined to concatenate the data, such concatenation may continue until an acknowledgement (ACK) is received, or a predetermined amount of data is concatenated in the packet, or a transaction boundary is detected. The ACK may include any record/packet that indicates a response to previously sent information. In one embodiment, if at least one of these conditions is satisfied, the packet may be enabled to be transmitted over the network. In one embodiment, data is concatenated until a timeout event occurs. In one embodiment, Nagle's algorithm, or an algorithm substantially adhering to Nagle's algorithm, may be used for concatenating data into the packet.


Upon detecting the transaction boundary, the invention may inhibit (e.g. disable) concatenation of data into a packet. Disabling concatenation may result in sending the data over the network virtually as soon as the data is available. Sending of the data virtually at once may also result in sending of a short packet. A short packet is a packet that may include less than a predetermined amount of data to be concatenated into the packet.


Concatenation may then be re-enabled. In one embodiment, concatenation may be re-enabled before a subsequent transaction begins. For example, concatenation may be re-enabled upon receipt of an ACK for a previous packet sent, upon detection of another transaction boundary, based on an occurrence of a timeout event, upon receipt of an indication to re-enable concatenation from another component, network device, application, or the like.


In an alternate embodiment, an ACK may be sent virtually immediately based on a write completion indicator included within a packet. Receipt of the ACK may also disable concatenation. In one embodiment, a write completion is indicated based on a variety of events, including when a current write buffer is full, at an end of a write operation, at an end of a transaction, upon closing an application, or the like. A write completion may be indicated by a push flag within a packet.


In one embodiment, the invention determines an amount of time to delay sending an ACK at least partly based on the write completion indicator. In one embodiment, a first packet is received. If a write completion indicator is detected in the first packet, the ACK may be sent virtually immediately. Otherwise, if a second packet is received or a timeout event occurs, the ACK may also be sent. Upon receipt of the ACK, another packet, which may include concatenated data, may be sent over the network.
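
A minimal sketch of this ACK-timing rule is shown below, assuming that the write completion indicator is visible to the receiver (for example as a push flag) and that the delayed-ACK timer is roughly 200 msec; the function name and parameters are illustrative.

```python
# Sketch of the ACK-timing rule described above: acknowledge at once when the
# received packet carries a write-completion indicator, otherwise fall back to
# delayed-ACK behaviour. Names and values are illustrative.

def ack_delay_seconds(write_completion: bool, second_packet_seen: bool,
                      delayed_ack_timer: float = 0.2) -> float:
    if write_completion or second_packet_seen:
        return 0.0                  # send the ACK virtually immediately
    return delayed_ack_timer        # otherwise wait for a second packet or the timer

print(ack_delay_seconds(write_completion=True,  second_packet_seen=False))  # 0.0
print(ack_delay_seconds(write_completion=False, second_packet_seen=False))  # 0.2
```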


In one embodiment, the present invention may be implemented on a TAD. The TAD may include a client, a proxy, a server, or the like.


Illustrative Operating Environment



FIG. 1 illustrates one embodiment of an environment in which the invention may operate. However, not all of these components may be required to practice the invention, and variations in the arrangement and type of the components may be made without departing from the spirit or scope of the invention.


As shown in the figure, system 100 includes client device 102, network 105, traffic management device (TMD) 106, and server devices 108-110. Client device 102 is in communication with TMD 106 and server device 110 through network 105. TMD 106 is in further communication with server devices 108-109. Although not shown, TMD 106 may be in communication with server devices 108-109 through a network infrastructure that is similar to network 105.


Generally, client device 102 may include virtually any computing device capable of connecting to another computing device to send and receive information, including web requests for information from a server, and the like. The set of such devices may include devices that typically connect using a wired communications medium such as personal computers, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, and the like. The set of such devices may also include devices that typically connect using a wireless communications medium such as cell phones, smart phones, radio frequency (RF) devices, infrared (IR) devices, integrated devices combining one or more of the preceding devices, or virtually any mobile device. Similarly, client device 102 may be any device that is capable of connecting using a wired or wireless communication medium such as a PDA, POCKET PC, wearable computer, and any other device that is equipped to communicate over a wired and/or wireless communication medium.


Client device 102 may further include a client application that is configured to manage various actions. Moreover, client device 102 may also include a web browser application that is configured to enable an end-user to interact with other devices and applications over network 105.


Client device 102 may communicate with network 105 employing a variety of network interfaces and associated communication protocols. Client device 102 may, for example, use various dial-up mechanisms with a Serial Line IP (SLIP) protocol, Point-to-Point Protocol (PPP), and the like. As such, client device 102 may transfer content at a low transfer rate, with potentially high latencies. For example, client device 102 may transfer data at about 14.4 to about 46 kbps, or potentially more. In another embodiment, client device 102 may employ a higher-speed cable, Digital Subscriber Line (DSL) modem, Integrated Services Digital Network (ISDN) interface, ISDN terminal adapter, or the like. As such, client device 102 may be considered to transfer data using a high bandwidth interface varying from about 32 kbps to over about 622 Mbps, although such rates are highly variable, and may change with technology.


Network 105 is configured to couple client device 102, with other network devices, such as TMD 106, server device 110, or the like. Network 105 is enabled to employ any form of computer readable media for communicating information from one electronic device to another. In one embodiment, network 105 is the Internet, and may include local area networks (LANs), wide area networks (WANs), direct connections, such as through a universal serial bus (USB) port, other forms of computer-readable media, or any combination thereof. On an interconnected set of LANs, including those based on differing architectures and protocols, a router may act as a link between LANs, to enable messages to be sent from one to another. Also, communication links within LANs typically include twisted wire pair or coaxial cable, while communication links between networks may utilize analog telephone lines, full or fractional dedicated digital lines including T1, T2, T3, and T4, Integrated Services Digital Networks (ISDNs), Digital Subscriber Lines (DSLs), wireless links including satellite links, or other communications links known to those skilled in the art.


Network 105 may further employ a plurality of wireless access technologies including, but not limited to, 2nd (2G), 3rd (3G) generation radio access for cellular systems, Wireless-LAN, Wireless Router (WR) mesh, and the like. Access technologies such as 2G, 3G, and future access networks may enable wide area coverage for network devices, such as client device 102, and the like, with various degrees of mobility. For example, network 105 may enable a radio connection through a radio network access such as Global System for Mobile communication (GSM), General Packet Radio Services (GPRS), Enhanced Data GSM Environment (EDGE), Wideband Code Division Multiple Access (WCDMA), and the like.


Furthermore, remote computers and other related electronic devices could be remotely connected to either LANs or WANs via a modem and temporary telephone link. In essence, network 105 includes any communication method by which information may travel between client device 102 and TMD 106.


Additionally, network 105 may include communication media that typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave, data signal, or other transport mechanism and includes any information delivery media. The terms “modulated data signal” and “carrier-wave signal” include a signal that has one or more of its characteristics set or changed in such a manner as to encode information, instructions, data, and the like, in the signal. By way of example, communication media includes wired media such as, but not limited to, twisted pair, coaxial cable, fiber optics, wave guides, and other wired media and wireless media such as, but not limited to, acoustic, RF, infrared, and other wireless media.


TMD 106 includes virtually any device that manages network traffic. Such devices include, for example, routers, proxies, transparent proxies, firewalls, load balancers, cache devices, application accelerators, devices that perform network address translation, any combination of the preceding devices, or the like. TMD 106 may control, for example, the flow of data packets delivered to or forwarded from an array of server devices, such as server devices 108-109. TMD 106 may direct a request for a resource to a particular server device based on network traffic, network topology, capacity of a server, content requested, and a host of other traffic distribution mechanisms. TMD 106 may receive data packets from and transmit data packets to the Internet, an intranet, or a local area network accessible through another network. TMD 106 may recognize packets that are part of the same communication, flow, and/or stream and may perform special processing on such packets, such as directing them to the same server device so that state information is maintained. TMD 106 also may support a wide variety of network applications such as Web browsing, email, telephony, streaming multimedia, and other traffic that is sent in packets. The BIG-IP® family of traffic managers, by F5 Networks of Seattle, Wash., is one example of a TMD. In one embodiment, TMD 106 may be integrated with one or more of servers 108-109, and provide content or services in addition to the TMD functions described herein.


TMD 106 may receive requests from client device 102. TMD 106 may select a server device from server devices 108-109 to forward the request. TMD 106 may employ any of a variety of criteria and mechanisms to select the server, including those mentioned above, load balancing mechanisms, and the like. TMD 106 may receive a response to the request and forward the response to client device 102.


In one embodiment, server devices may be geographically distributed from each other. In one embodiment, TMD 106 may make a decision as to which server device is best configured to respond to a request from client 102, based on whether the client 102 is connected to the network 105 with a high bandwidth connection. TMD 106 may then either forward a communication to the selected server device or cause the client request to be redirected to the selected server. HTTP redirection may be used to redirect the client request, in one embodiment.


TMD 106 may be implemented using one or more personal computers, server devices, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, radio frequency (RF) devices, infrared (IR) devices, integrated devices combining one or more of the preceding devices, and the like. Such devices may be implemented solely in hardware or in hardware and software. For example, such devices may include some application specific integrated circuits (ASICs) coupled to one or more microprocessors. The ASICs may be used to provide a high-speed switch fabric while the microprocessors may perform higher layer processing of packets. An embodiment of a network device that could be used as TMD 106 is network device 200 of FIG. 2, configured with appropriate software.


Servers 108-110 may include any computing device capable of communicating packets to another network device. Each packet may convey a piece of information. A packet may be sent for handshaking, i.e., to establish a connection or to acknowledge receipt of data. The packet may include information such as a request, a response, or the like. Generally, packets received by server devices 108-110 will be formatted according to TCP/IP, but they could also be formatted using another transport protocol, such as SCTP, X.25, NetBEUI, IPX/SPX, token ring, similar IPv4/6 protocols, and the like. Moreover, the packets may be communicated between server devices 108-110, TMD 106, and client device 102 employing HTTP, HTTPS, and the like.


In one embodiment, server devices 108-110 are configured to operate as a website server. However, server devices 108-110 are not limited to web server devices, and may also operate as a messaging server, a File Transfer Protocol (FTP) server, a database server, content server, and the like. Additionally, each of server devices 108-110 may be configured to perform a different operation. Thus, for example, back-end server device 108 may be configured as a messaging server, while back-end server device 109 is configured as a database server. Moreover, while server devices 108-110 may operate as other than a website, they may still be enabled to receive an HTTP communication.


Devices that may operate as server devices 108-110 include personal computers, desktop computers, multiprocessor systems, microprocessor-based or programmable consumer electronics, network PCs, server devices, and the like.


A network device, such as client device 102, TMD 106, or at least one of server devices 108-110, may determine a transaction boundary associated with a network connection. In one embodiment, the network device may be a transaction aware device (TAD). In one embodiment, TMD 106 may determine the transaction boundary by inspecting a flow of data packets delivered to or forwarded from an array of server devices, such as server devices 108-109. For example, TMD 106 may detect a length of a transaction, a flag, an EOF indicator, or the like, included in a packet of the flow of data packets.


In one embodiment, client device 102, TMD 106, and server devices 108-110 may determine the transaction boundary by receiving an indicator of the transaction boundary from another component, network device, application, or the like. For example, an application may indicate the transaction boundary through an application programming interface (API), or the like.


Based, in part, on a determination of the transaction boundary, a network device may selectively enable a concatenation of data into a packet to modify a number of packets transmitted, or to not concatenate the data and rather immediately send data over the network when it is available. In one embodiment, the network device may also receive an indication to re-enable concatenation from another component, another network device, another application, or the like. In another embodiment, a first network device may determine the transaction boundary and may provide the determined information to a second network device. Based, in part, on the received information, the second network device may selectively enable or disable a concatenation.


It is further noted that terms such as client and server device, as used herein, refer to functions within a device. As such, virtually any device may be configured to operate as a client device, a server device, or even include both a client and a server device function. Furthermore, where two or more peers are employed, any one of them may be designated as a client or as a server, and be configured to conform to the teachings of the present invention.


Illustrative Network Device



FIG. 2 shows one embodiment of a network device, according to one embodiment of the invention. Network device 200 may include many more or fewer components than those shown. The components shown, however, are sufficient to disclose an illustrative embodiment for practicing the invention. Network device 200 may represent, for example, TMD 106, server devices 108-110, or even client device 102 of FIG. 1.


Network device 200 includes processing unit 212, video display adapter 214, and a mass memory, all in communication with each other via bus 222. The mass memory generally includes RAM 216, ROM 232, and one or more permanent mass storage devices, such as hard disk drive 228, tape drive, optical drive, and/or floppy disk drive. The mass memory stores operating system 220 for controlling the operation of network device 200. Operating system 220 may further include networking components 256. Network device 200 may also include concatenation manager (CM) 252.


As illustrated in FIG. 2, network device 200 also can communicate with the Internet, or some other communications network, such as network 105 in FIG. 1, via network interface unit 210, which is constructed for use with various communication protocols including the TCP/IP protocol. Network interface unit 210 is sometimes known as a transceiver, transceiving device, or network interface card (NIC).


The mass memory as described above illustrates another type of computer-readable media, namely computer storage media. Computer storage media may include volatile, nonvolatile, removable, and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage; magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computing device.


The mass memory also stores program code and data. One or more applications 250 are loaded into mass memory and run on operating system 220. Examples of application programs may include email programs, routing programs, schedulers, calendars, database programs, word processing programs, HTTP programs, traffic management programs, security programs, and so forth.


Network device 200 may also include an SMTP handler application for transmitting and receiving e-mail, an HTTP handler application for receiving and handling HTTP requests, and an HTTPS handler application for handling secure connections. The HTTPS handler application may initiate communication with an external application in a secure fashion. Moreover, network device 200 may further include applications that support virtually any secure connection, including TLS, TTLS, EAP, SSL, IPSec, and the like. Similarly, network device 200 may include applications that support a variety of tunneling mechanisms, such as VPN, PPP, L2TP, and so forth.


Network device 200 may also include input/output interface 224 for communicating with external devices, such as a mouse, keyboard, scanner, or other input devices not shown in FIG. 2. Likewise, network device 200 may further include additional mass storage facilities such as CD-ROM/DVD-ROM drive 226 and hard disk drive 228. Hard disk drive 228 may be utilized to store, among other things, application programs, databases, and the like.


In one embodiment, the network device 200 includes at least one Application Specific Integrated Circuit (ASIC) chip (not shown) coupled to bus 222. The ASIC chip can include logic that performs some of the actions of network device 200. For example, in one embodiment, the ASIC chip can perform a number of packet processing functions for incoming and/or outgoing packets. In one embodiment, the ASIC chip can perform at least a portion of the logic to enable the operation of CM 252.


In one embodiment, network device 200 can further include one or more field-programmable gate arrays (FPGA) (not shown), instead of, or in addition to, the ASIC chip. A number of functions of the network device can be performed by the ASIC chip, the FPGA, by CPU 212 with instructions stored in memory, or by any combination of the ASIC chip, FPGA, and CPU.


Networking components 256 may include, for example, various components to manage operations of an Open Systems Interconnection (OSI) network stack, including Internet Protocol (IP), TCP, UDP, SSL, HTTP, content encoding (e.g., content compression), and similar network related services. Networking components 256 may include a send-queue enabled to buffer messages and/or data before sending the messages over a network. Networking components 256 may utilize a transceiver, such as network interface unit 210, to send the messages over the network. Networking components 256 may be enabled to employ a concatenation of data into a packet to modify a number of packets and/or payload size of the packets transmitted over a network. Networking components 256 may provide a mechanism for enabling the concatenation, or enabling immediately sending data when it is available (e.g., absent of concatenation).


CM 252 may be configured to receive various network information from networking components 256. Based, in part, on the received information from networking components 256, CM 252 may determine an existence of a transaction boundary. The information may be gathered, for example, by networking components 256 based on received packets, sent packets, or the like. In another embodiment, a separate component (not shown) may be configured to determine the transaction boundary and to expose the transaction boundary information to CM 252. In one embodiment, an indicator of the transaction boundary may be received from another component, such as one of applications 250, another network device, or the like. For example, one of applications 250, or the like, may indicate the end of the transaction, a flush of a buffer of data to be sent, or the like, through an application programming interface (API), or the like.


CM 252 may enable the transaction boundary determination to be configurable. For example, CM 252 may provide an interface or the like which enables a user and/or another computing device to provide a set of application level protocols to be inspected. In one embodiment, application level protocols are defined at the application layer (OSI Layer 7). The type of application level protocol may be determined by the data included in a transferred packet and/or the port utilized by the network connection. Examples of various types of application level protocols include, but are not limited to, HTTP, HTTPS, SMTP, FTP, NNTP, or the like.
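
Purely as an illustration of such a configurable set of inspected protocols, the sketch below maps well-known ports to application level protocol names; the mapping and names are assumptions of this sketch, not a defined interface of CM 252.

```python
# Illustrative configuration of which application level protocols are inspected for
# transaction boundaries, keyed by well-known port. Invented for this sketch only.
from typing import Optional

INSPECTED_PROTOCOLS = {
    80: "HTTP",
    443: "HTTPS",
    25: "SMTP",
    21: "FTP",
    119: "NNTP",
}

def protocol_for_connection(dst_port: int) -> Optional[str]:
    """Return the protocol to inspect for this connection, or None if not inspected."""
    return INSPECTED_PROTOCOLS.get(dst_port)
```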


CM 252 may be enabled to selectively concatenate data into a packet based on the detection of the transaction boundary. CM 252 may selectively concatenate data received from networking components 256, applications 250, or the like. CM 252 may employ a process such as described in conjunction with FIG. 5.


In one embodiment, CM 252 may be enabled to send virtually immediately an ACK based on a write completion indicator included within a packet. CM 252 may detect the write completion indicator based on information received from networking components 256, or the like. CM 252 may employ a process such as described in conjunction with FIG. 6.


Although illustrated in FIG. 2 as distinct components, networking components 256 and CM 252 may be arranged, combined, or the like, in any of a variety of ways, without departing from the scope of the invention. For example, networking components 256 and CM 252 may be configured to operate as a single component. Moreover, networking components 256 and CM 252 may reside within operating system 220. Furthermore, in one embodiment, networking components 256 and CM 252 may reside on different devices, such as TMD 106 of FIG. 1, server devices 108-110, or client device 102. In one embodiment, CM 252 may be configured to receive various network information from networking components 256 through a network communication, or the like.


Illustrative Signal Flow



FIG. 3 illustrates one embodiment of a signal flow diagram showing an effect of inhibiting a concatenation of data into a packet. Signal flow 300 of FIG. 3 may represent a signal flow between client 102 and TMD 106 of FIG. 1. As shown, signal flow 300 illustrates how inhibiting the concatenation may minimize or virtually eliminate a delay caused by the interaction of a concatenation algorithm, such as an algorithm substantially similar to Nagle's algorithm, and a Delayed ACK algorithm.


Signal flow 300 begins at time 302 where data is concatenated into a packet (e.g. full packet) to modify a number of packets transmitted over a network, and the data is sent. For example, data may be concatenated into the packet until a predetermined amount of data is reached (e.g. packet is full). As shown, at time 302, the packet is full, and thus ready for sending. Processing then continues to time 304.


At time 304, the packet is received. In one embodiment, a Delayed ACK algorithm may operate to delay sending an ACK for the received packet until after a second data packet is received or after a timeout event occurs. In one embodiment, the timeout event occurs after about 200 msecs. Processing then continues to time 308.


At time 308, another packet (e.g. short packet) is filled with data and a transaction boundary is detected. Upon detecting the transaction boundary, the invention may inhibit (e.g. disable) concatenation of data into a packet. Disabling concatenation may result in sending the data over the network virtually as soon as the data is available. As shown, sending of the data may also result in sending of a short packet. Processing then continues to time 310.


At time 310, the timeout event occurs and the delayed ACK is sent. In one embodiment, the delayed ACK is sent in accordance with the Delayed ACK algorithm. The delayed ACK may be a particular type of packet, a packet including a particular type of data, protocol, and/or record header, or the like. Processing then continues to time 312.


At time 312, the delayed ACK is received, and concatenation may be re-enabled upon receipt of the ACK. In another embodiment, concatenation may be re-enabled upon detection of another transaction boundary, based on an occurrence of a timeout event, upon receipt of an indication to re-enable concatenation from another component, network device, application, or the like. In another embodiment, concatenation may occur after time 308. Processing then continues to time 314.


At time 314, the short packet is received and processed. For example, the short packet may be utilized to complete a transaction. Processing may continue to another time step for further processing.



FIG. 4 illustrates one embodiment of a signal flow diagram showing an effect of immediately sending an ACK based on detection of a write completion. Signal flow 400 of FIG. 4 may represent a signal flow between one of servers 108-109 and TMD 106 of FIG. 1. As shown, signal flow 400 illustrates how sending the immediate ACK may minimize a delay caused by the interaction of a concatenation algorithm, such as an algorithm substantially similar to Nagle's algorithm, and a Delayed ACK algorithm.


Signal flow 400 begins at time 302 where data has been concatenated into a packet (e.g. full packet) to modify a number of packets transmitted over a network, and the data is sent. For example, data may be concatenated into the packet until a predetermined amount of data is reached. As shown, the packet is full, and thus ready for sending. Also as shown, the full packet includes a write completion indicator. In one embodiment, the write completion is indicated based on a variety of events, including when a current write buffer is full, at an end of a write operation, at an end of a transaction, upon closing an application, or the like. The write completion may be also indicated by a push flag within the packet. Processing then continues to time 304.


At time 304, the packet is received. In one embodiment, the invention may determine an amount of time to delay sending an ACK at least partly based on the write completion indicator included within the packet received. In one embodiment, if a write completion indicator is not detected, a Delayed ACK algorithm may operate to delay sending an ACK for the received packet until after a second data packet is received or after a timeout event occurs. As shown, the write completion indicator is detected in the packet, and the ACK is sent virtually immediately. Processing then continues to time 308.


At time 308, another packet (e.g. short packet) is filled with data. A concatenation algorithm, substantially similar to Nagle's algorithm, may continue concatenation until an acknowledgement (ACK) is received, or a predetermined amount of data is concatenated in the packet. As shown, because the predetermined amount of data has not been concatenated and an ACK has not been received, the short packet is not sent at time 308. Processing then continues to time 402.


At time 402, the immediately sent ACK is received, thereby enabling the sending of the short packet in accordance with an algorithm substantially similar to Nagle's algorithm. As shown, there may be a delay between time 308 and time 402. However, in one embodiment, this delay may be less than another delay in the absence of an immediately sent ACK (e.g., when a delayed ACK is sent). The short packet is then received for further processing at time 404. Processing may continue to another time step for further processing.


Generalized Operation


The operation of certain aspects of the invention will now be described with respect to FIGS. 5-6. FIG. 5 illustrates a logical flow diagram generally showing one embodiment of a process for selectively enabling network packet concatenation based on a detection of a transaction boundary. Process 500 of FIG. 5 may be implemented, for example, within CM 252 of FIG. 2.


Process 500 begins, after a start block, at block 502, where data is received. In one embodiment, the data may be included in a received packet. The packet may be received from a network device, a calling process, a network component, such as networking component 256 of FIG. 2, or the like. A packet's predetermined size (packet buffer) may be as large as a maximum segment size (MSS). A packet may include data up to a predetermined amount of data. Processing then continues to decision block 504.


At decision block 504, a determination is made whether a buffer for a packet to be sent is full. In one embodiment, it is determined whether the packet to be sent has accumulated a predetermined amount of data. If the packet to be sent has accumulated the predetermined amount of data, then processing continues to block 512. Otherwise, processing continues to decision block 506. In one embodiment (not shown), processing may also continue to block 512 if a timeout event occurs.


At decision block 506, a determination is made whether an ACK has been received. An ACK may be received for a previously sent packet. In one embodiment, the sending of the ACK may have been delayed based on a Delayed ACK algorithm. An ACK may be a particular type of packet, a packet including a particular type of data, protocol, and/or record header, or the like. If the ACK has been received, then processing continues to block 512. Otherwise, processing continues to decision block 508.


At decision block 508, a determination is made whether data should be combined, concatenated, and/or coalesced into a packet based on a detection of a transaction boundary. In one embodiment, the transaction boundary may be determined by detecting an indicator of an end of a transaction. To detect the end of the transaction, a length of a transaction included in a protocol and/or record header of a packet may be received. In one embodiment, the length of the transaction may be included in the data and/or packet received at block 502. After determining the length of the transaction, a plurality of packets is received. For example, the plurality of packets may be associated with the data and/or packet received at block 502. If a length of the plurality of packets equals the length of the transaction, an indicator of the end of the transaction is provided, thereby indicating that the transaction boundary is detected.


In another embodiment, the end of the transaction is based on an end of file (EOF) indicator within a packet. For example, if the EOF indicator is included within the data and/or packet received at block 502, then the transaction boundary is detected. In one embodiment, an indicator of the end of the transaction may be received from another component, network device, application, or the like.


In another embodiment, a transaction boundary may be determined by detecting an indicator of a beginning of a transaction. In one embodiment, the beginning of the transaction may be indicated by a flag within the data and/or packet received at block 502. In one embodiment, an indicator of the beginning of the transaction may be received from another component, network device, application, or the like.


If it is determined that the transaction boundary is detected, then processing continues to block 512. Otherwise, processing continues to block 510.


At block 510, data is concatenated into the packet to be sent. For example, data may be concatenated into the packet until a predetermined amount of data is reached. In one embodiment, data of a first packet is concatenated with other data. The other data may be provided by a second packet, a networking component, another component, a network API call, or the like. In one embodiment, the concatenated data is stored in the first packet. In another embodiment, data of the first packet is combined and/or coalesced with data of the second packet and stored in the first packet. The combined and/or coalesced result may be further processed by compressing, encrypting, encoding, interleaving the result, or the like. Therefore, concatenation, combination, and/or coalescing of the data further modifies the number of packets and/or a network packet's data payload size sent over the network. In one embodiment, data of a subsequent received packet may be buffered into the packet until the packet buffer is full. Processing then loops back to block 502.


In an alternate embodiment, at block 510, data is instead coalesced into the packet and processing may wait for a predetermined time period. For example, the data may be coalesced until a timeout event occurs, or the like, and then subsequently sent over the network. In one embodiment, the time period may be selectively modified by selectively reducing the time period or selectively extending the time period, or the like. Processing then loops back to block 502.


At block 512, the packet is sent. The sending of the packet may differ depending on whether the buffer is full, an ACK is received, a timeout event occurs, or whether the transaction boundary is detected. For example, if the packet buffer is full, or the transaction boundary is detected, or a timeout event occurs, then the packet may be sent as is, and the received data may be enabled for subsequent processing at block 502. If the packet buffer is not full, then extra data may also be padded into the packet to be sent. Processing then continues to block 514.


At block 514, and in one embodiment, concatenation is enabled. In one embodiment, concatenation may be enabled by enabling an operating system to employ the present invention. In one embodiment, concatenation may be re-enabled before a subsequent transaction begins. For example, concatenation may be re-enabled upon receipt of an ACK for a previous packet sent, upon detection of another transaction boundary, based on an occurrence of a timeout event, upon receipt of an indication to re-enable concatenation from another component, network device, application, or the like. Processing then returns to a calling process for further processing.


In an alternate embodiment, block 510 may occur between block 502 and decision block 504. In this embodiment, data may be concatenated into the packet to be sent as soon as it is available.
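
A compact, non-authoritative reading of process 500 is sketched below, using simple stand-ins for the packet buffer, ACK state, and boundary detection; the class ConcatenationManager and its members are invented for this sketch.

```python
# Sketch of the decision flow in process 500 (blocks 502-514). The class and its
# members are stand-ins for illustration, not the patented implementation.

class ConcatenationManager:
    def __init__(self, send, mss=1460):
        self.send = send                  # callable that transmits one packet
        self.mss = mss                    # predetermined packet size (up to an MSS)
        self.pending = bytearray()        # packet buffer being concatenated into
        self.ack_received = False
        self.concatenation_enabled = True

    def on_data(self, data: bytes, boundary: bool = False):
        """Block 502: data arrives; boundary=True models a detected transaction boundary."""
        self.pending.extend(data)
        buffer_full = len(self.pending) >= self.mss   # decision block 504
        if buffer_full or self.ack_received or boundary:   # decision blocks 504/506/508
            self.send(bytes(self.pending))            # block 512: send the packet
            self.pending.clear()
            self.ack_received = False
            self.concatenation_enabled = True         # block 514: concatenation enabled
        # else block 510: data stays concatenated in the buffer for a later pass

manager = ConcatenationManager(send=lambda pkt: print(f"sent {len(pkt)} bytes"))
manager.on_data(b"part of a response")                 # concatenated, not yet sent
manager.on_data(b"end of transaction", boundary=True)  # boundary detected: short packet sent
```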



FIG. 6 illustrates a logical flow diagram generally showing one embodiment of a process for immediately sending an ACK based on detection of a write completion indicator, in accordance with the invention. Process 600 of FIG. 6 may be implemented, for example, within CM 252 of FIG. 2.


Process 600 begins, after a start block, at block 602, where a first packet is received. The first packet may be received from a network device, a calling process, a network component, such as networking component 256 of FIG. 2, or the like. Processing then continues to decision block 604.


At decision block 604, it is determined whether the first packet includes a write completion indicator. In one embodiment, a write completion is indicated based on a variety of events, including when a current write buffer is full, at an end of a write operation, at an end of a transaction, upon closing an application, or the like. In one embodiment, a write completion may be indicated by a push flag within the first packet. In one embodiment, the invention determines an amount of time to delay sending an ACK at least partly based on the write completion indicator. If it is determined that the packet includes a write completion indicator, processing then continues to block 610, where the ACK is sent virtually immediately. Otherwise, processing continues to block 606.


At block 606, processing waits for a receipt of a second packet or for a timeout event to occur. If a second packet is received or a timeout event occurs, processing continues to block 610. At block 610, the ACK may be sent. In one embodiment, the ACK may be sent to the sender of the first and/or second packet.


In one embodiment (not shown), receipt of the ACK may disable concatenation. Receipt of the ACK may also enable another packet, which may include concatenated data, to be sent over the network.
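
A minimal sketch of process 600 follows, assuming a threading timer stands in for the timeout event and that the write completion indicator is already parsed from the first packet; the class AckScheduler and its members are illustrative.

```python
# Sketch of process 600 (blocks 602-610): an ACK is sent at once when the first
# packet carries a write-completion indicator, otherwise only after a second packet
# or a timeout. Names and the timeout value are illustrative.
import threading

class AckScheduler:
    def __init__(self, send_ack, timeout=0.2):
        self.send_ack = send_ack
        self.timeout = timeout
        self.timer = None

    def on_first_packet(self, write_completion: bool):
        """Blocks 602/604: first packet received, write-completion indicator checked."""
        if write_completion:
            self.send_ack()                        # block 610: undelayed ACK
        else:
            # Block 606: wait for a second packet or for the timeout event.
            self.timer = threading.Timer(self.timeout, self.send_ack)
            self.timer.start()

    def on_second_packet(self):
        if self.timer is not None:
            self.timer.cancel()
            self.timer = None
        self.send_ack()                            # block 610: ACK sent
```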


It will be understood that each block of the flowchart illustration, and combinations of blocks in the flowchart illustration, can be implemented by computer program instructions. These program instructions may be provided to a processor to produce a machine, such that the instructions, which execute on the processor, create means for implementing the actions specified in the flowchart block or blocks. The computer program instructions may be executed by a processor to cause a series of operational steps to be performed by the processor to produce a computer implemented process such that the instructions, which execute on the processor, provide steps for implementing the actions specified in the flowchart block or blocks.


Accordingly, blocks of the flowchart illustration support combinations of means for performing the specified actions, combinations of steps for performing the specified actions and program instruction means for performing the specified actions. It will also be understood that each block of the flowchart illustration, and combinations of blocks in the flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified actions or steps, or combinations of special purpose hardware and computer instructions.


The above specification, examples, and data provide a complete description of the manufacture and use of the composition of the invention. Since many embodiments of the invention can be made without departing from the spirit and scope of the invention, the invention resides in the claims hereinafter appended.

Claims
  • 1. A network device, comprising: memory to store data; and one or more processors operative to perform actions, comprising: receiving a packet of data that is a concatenation of data from at least a first packet of data and a second packet of data, the concatenation being based at least on a detection of an application layer transaction boundary; selectively delaying sending an acknowledgement (ACK) in response to the received packet; in response to receiving a short packet of data that is less than a determined amount of data to be concatenated, sending a delayed ACK that re-enables concatenation so that a subsequently received packet of data is to include concatenated data; in response to receiving a write completion indicator, sending an undelayed ACK; and in response to sending the undelayed ACK, receiving an other short packet.
  • 2. The network device of claim 1, wherein the actions, further comprising: receiving an other packet that is absent concatenation of data, the concatenation being inhibited based on a detection of a transaction boundary.
  • 3. The network device of claim 1, wherein the detection of the application layer transaction boundary that is determined based on at least one of an indicator of an end of a transaction or a beginning of a transaction.
  • 4. The network device of claim 1, wherein selectively delaying sending of an ACK further comprises delaying sending of the delayed ACK until an other packet of concatenated data is received, or until a timeout event is detected.
  • 5. The network device of claim 1, wherein the receiving the write completion indicator further comprises receiving the write completion indicator within a selectively concatenated packet of data.
  • 6. The network device of claim 1, wherein the write completion indicator indicates at least one of an end of a write operation, an end of a transaction, or a closing of an application.
  • 7. A processor based method, comprising: receiving a packet having data that is a concatenation of data from at least a first packet of data and a second packet of data, the concatenation being based at least on a detection of an application layer transaction boundary; selectively delaying sending an acknowledgement (ACK) in response to the received packet; in response to receiving a short packet of data that is less than a determined amount of data to be concatenated, sending a delayed acknowledgement (ACK) that re-enables concatenation so that a subsequently received packet of data includes concatenated data; in response to receiving a write completion indicator, sending an undelayed ACK; and in response to sending the undelayed ACK, receiving an other short packet.
  • 8. The processor based method of claim 7, further comprising: receiving an other packet that is a concatenation of data, after receiving un-concatenated packet data, the re-enabling of concatenation being based on a timeout event or detection of an other transaction boundary.
  • 9. The processor based method of claim 7, wherein selectively delaying sending of an ACK further comprises delaying sending of the delayed ACK until an other packet of concatenated data is received, or until a timeout event is detected.
  • 10. The processor based method of claim 7, further comprising: receiving an other packet that is absent concatenation of data, the concatenation being inhibited based on a detection of a transaction boundary.
  • 11. The processor based method of claim 7, wherein the detection of the application layer transaction boundary comprises at least one of an indicator of an end of a transaction or a beginning of a transaction.
  • 12. A system, comprising: a processor based sender that performs actions, including: selectively concatenating data of a first packet of data with a subsequent second packet of data based, in part, on a detection of an application layer transaction boundary; and a processor based receiver that performs actions, including: receiving the packet of concatenated data; selectively delaying sending an acknowledgement (ACK) in response to the received packet; in response to receiving a short packet of data that is less than a determined amount of data to be concatenated, sending a delayed ACK that re-enables concatenation so that a subsequently received packet of data includes concatenated data; in response to receiving a write completion indicator, sending an undelayed ACK; and in response to sending the undelayed ACK, receiving an other short packet.
  • 13. The system of claim 12, wherein the detection of the application layer transaction boundary comprises at least one of an indicator of an end of a transaction or a beginning of a transaction.
  • 14. The system of claim 12, wherein the processor based sender performs further actions, including: when an end of a transaction is detected, transmitting data in an other packet without concatenating data from a plurality of packets.
  • 15. The system of claim 12, wherein detection of the application layer transaction boundary is based on at least one of a count of data packets, an end of file indicator, or receiving an indicator of a presence of the transaction boundary from an other processor.
  • 16. The system of claim 12, wherein detecting the transaction boundary further comprises: receiving a length of a transaction within a protocol header of a packet; and when a length of a plurality of received packets by the sender equals the length of the transaction, detecting the transaction boundary.
  • 17. The system of claim 12, wherein detecting the transaction boundary further comprises receiving an indicator of an end of a transaction from an application.
  • 18. The system of claim 12, wherein selectively concatenating data continues until an ACK is received or a predetermined amount of data is concatenated, or the transaction boundary is detected.
CROSS-REFERENCE TO RELATED APPLICATIONS

The present application is a Continuation patent application of U.S. patent application Ser. No. 11/366,367 entitled “Selectively Enabling Packet Concatenation Based On Transaction Boundaries,” filed on Mar. 2, 2006, which also claims priority from provisional application Ser. No. 60/764,005 entitled “Selectively Enabling Packet Concatenation Based On Transaction Boundaries,” filed on Feb. 1, 2006, the benefit of the earlier filing dates of which is hereby claimed under 35 U.S.C. § 119(e) and § 120, and which are each further incorporated herein by reference in their entirety.

7624436 Balakrishnan et al. Nov 2009 B2
7711857 Balassanian May 2010 B2
7721084 Salminen et al. May 2010 B2
8009566 Zuk et al. Aug 2011 B2
20010032254 Hawkins Oct 2001 A1
20010049741 Skene et al. Dec 2001 A1
20020025036 Sato Feb 2002 A1
20020054567 Fan May 2002 A1
20020055980 Goddard May 2002 A1
20020059428 Susai et al. May 2002 A1
20020073223 Darnell et al. Jun 2002 A1
20020085587 Mascolo Jul 2002 A1
20020103663 Bankier et al. Aug 2002 A1
20020107903 Richter et al. Aug 2002 A1
20020126671 Ellis et al. Sep 2002 A1
20020133586 Shanklin et al. Sep 2002 A1
20020141393 Eriksson et al. Oct 2002 A1
20020147916 Strongin et al. Oct 2002 A1
20020169980 Brownell Nov 2002 A1
20030018827 Guthrie et al. Jan 2003 A1
20030043755 Mitchell Mar 2003 A1
20030050974 Mani-Meitav et al. Mar 2003 A1
20030061256 Mathews et al. Mar 2003 A1
20030069973 Ganesan et al. Apr 2003 A1
20030097460 Higashiyama et al. May 2003 A1
20030097484 Bahl May 2003 A1
20030103466 McCann et al. Jun 2003 A1
20030110230 Holdsworth et al. Jun 2003 A1
20030112755 McDysan Jun 2003 A1
20030123447 Smith Jul 2003 A1
20030126029 Dastidar et al. Jul 2003 A1
20030139183 Rantalainen Jul 2003 A1
20030154406 Honarvar et al. Aug 2003 A1
20030169859 Strathmeyer et al. Sep 2003 A1
20030177267 Orava et al. Sep 2003 A1
20030179738 Diachina et al. Sep 2003 A1
20030214948 Jin et al. Nov 2003 A1
20030217171 Von Stuermer et al. Nov 2003 A1
20030223413 Guerrero Dec 2003 A1
20040004975 Shin et al. Jan 2004 A1
20040006638 Oberlander et al. Jan 2004 A1
20040008629 Rajagopal et al. Jan 2004 A1
20040008664 Takahashi et al. Jan 2004 A1
20040008728 Lee Jan 2004 A1
20040010473 Hsu et al. Jan 2004 A1
20040015686 Connor et al. Jan 2004 A1
20040037322 Sukonik et al. Feb 2004 A1
20040052257 Abdo et al. Mar 2004 A1
20040088585 Kaler et al. May 2004 A1
20040095934 Cheng et al. May 2004 A1
20040148425 Haumont et al. Jul 2004 A1
20040193513 Pruss et al. Sep 2004 A1
20040218603 Lee et al. Nov 2004 A1
20040225810 Hiratsuka Nov 2004 A1
20040240446 Compton Dec 2004 A1
20050060295 Gould et al. Mar 2005 A1
20050063303 Samuels et al. Mar 2005 A1
20050063307 Samuels et al. Mar 2005 A1
20050074007 Samuels et al. Apr 2005 A1
20050108420 Brown et al. May 2005 A1
20050114700 Barrie et al. May 2005 A1
20050132060 Mo et al. Jun 2005 A1
20050135436 Nigam et al. Jun 2005 A1
20050144278 Atamaniouk Jun 2005 A1
20050171930 Arning et al. Aug 2005 A1
20050187979 Christensen et al. Aug 2005 A1
20050203988 Nollet et al. Sep 2005 A1
20050216555 English et al. Sep 2005 A1
20050238010 Panigrahy et al. Oct 2005 A1
20050238011 Panigrahy Oct 2005 A1
20050238012 Panigrahy et al. Oct 2005 A1
20050238022 Panigrahy Oct 2005 A1
20050265235 Accapadi et al. Dec 2005 A1
20050271048 Casey Dec 2005 A1
20060005008 Kao Jan 2006 A1
20060020598 Shoolman et al. Jan 2006 A1
20060026290 Pulito et al. Feb 2006 A1
20060029062 Rao et al. Feb 2006 A1
20060029063 Rao et al. Feb 2006 A1
20060029064 Rao et al. Feb 2006 A1
20060036747 Galvin et al. Feb 2006 A1
20060037071 Rao et al. Feb 2006 A1
20060056443 Tao et al. Mar 2006 A1
20060123226 Kumar et al. Jun 2006 A1
20060153228 Stahl et al. Jul 2006 A1
20060233166 Bou-Diab et al. Oct 2006 A1
20060235973 McBride et al. Oct 2006 A1
20060242300 Yumoto et al. Oct 2006 A1
20060265689 Kuznetsov et al. Nov 2006 A1
20060294366 Nadalin et al. Dec 2006 A1
20070094336 Pearson Apr 2007 A1
20070121615 Weill et al. May 2007 A1
20070153798 Krstulich Jul 2007 A1
20070156919 Potti et al. Jul 2007 A1
20080034127 Nishio Feb 2008 A1
20080253366 Zuk et al. Oct 2008 A1
20080291910 Tadimeti et al. Nov 2008 A1
20090063852 Messerges et al. Mar 2009 A1
20090077618 Pearce et al. Mar 2009 A1
20090295905 Civanlar et al. Dec 2009 A1
20090327113 Lee et al. Dec 2009 A1
Non-Patent Literature Citations (117)
Entry
Official Communication for U.S. Appl. No. 11/258,551 mailed Aug. 15, 2012.
Official Communication for U.S. Appl. No. 11/243,844 mailed Nov. 26, 2012.
Official Communication for U.S. Appl. No. 11/258,551 mailed Dec. 6, 2012.
Acharya et al., “Scalable Web Request Routing with MPLS,” IBM Research Report, IBM Research Division, Dec. 5, 2001, 15 pages.
Official Communication for U.S. Appl. No. 11/258,551 mailed Mar. 3, 2009.
Official Communication for U.S. Appl. No. 11/258,551 mailed Jan. 4, 2010.
Official Communication for U.S. Appl. No. 11/258,551 mailed Aug. 3, 2010.
Official Communication for U.S. Appl. No. 11/258,551 mailed Mar. 31, 2011.
Official Communication for U.S. Appl. No. 12/199,768 mailed Jun. 18, 2010.
Official Communication for U.S. Appl. No. 12/199,768 mailed Nov. 22, 2010.
Official Communication for U.S. Appl. No. 12/199,768 mailed Feb. 1, 2011.
Official Communication for U.S. Appl. No. 12/475,307 mailed Feb. 10, 2011.
Official Communication for U.S. Appl. No. 12/475,307 mailed Jul. 22, 2011.
Official Communication for U.S. Appl. No. 12/475,307 mailed Sep. 29, 2011.
Official Communication for U.S. Appl. No. 11/366,367 mailed May 18, 2012.
Official Communication for U.S. Appl. No. 10/719,375 mailed Jun. 18, 2008.
Official Communication for U.S. Appl. No. 10/719,375 mailed Apr. 24, 2009.
Official Communication for U.S. Appl. No. 10/719,375 mailed Nov. 13, 2009.
Official Communication for U.S. Appl. No. 10/719,375 mailed May 25, 2010.
Official Communication for U.S. Appl. No. 10/719,375 mailed Dec. 7, 2010.
Official Communication for U.S. Appl. No. 10/719,375 mailed Jun. 16, 2011.
Official Communication for U.S. Appl. No. 10/719,375 mailed Jan. 25, 2012.
Official Communication for U.S. Appl. No. 11/243,844 mailed Nov. 8, 2011.
Official Communication for U.S. Appl. No. 11/243,844 mailed Jun. 14, 2012.
Oracle Communication and Mobility Server, Aug. 2007, http://www.oracle.com/technology/products/ocms/otn_front.html, accessed May 15, 2008, 108 pages.
Session Initiation Protocol, http://en.wikipedia.org/w/index.php?title=Session_Initiation_Protoc..., accessed May 14, 2008, 5 pages.
IP Multimedia Subsystems, http://en.wikipedia.org/w/index.php?title=IP_Multimedia_Subsyst..., accessed May 15, 2008, 8 pages.
F5 Networks Delivers Blistering Application Traffic Management Performance and Unmatched Intelligence via New Packet Velocity ASIC and BIG-IP Platforms, F5 Networks, Inc. Press Release dated Oct. 21, 2002, 3 pages.
Secure and Optimize Oracle 11i E-Business Suite with F5 Solutions, F5 Application Ready Network Guide, Oracle E-Business Suite 11i, Aug. 2007, 2 pages.
Using the Universal Inspection Engine, Manual Chapter: BIG-IP Solutions Guide v4.6.2. Using the Universal Inspection Engine, 2002, 8 pages.
Hewitt, J. R. et al., “Securities Practice and Electronic Technology,” Corporate Securities Series, New York: Law Journal Seminars-Press, 1998, title page, bibliography page, pp. 4.29-4.30.
Reardon, M., “A Smarter Session Switch: Arrowpoint's CS Session Switches Boast the Brains Needed for E-Commerce,” Data Communications, Jan. 1999, title page, pp. 3, 5, 18.
Freier, A. et al., “The SSL Protocol Version 3.0,” IETF, Internet Draft, Nov. 18, 1996, 60 pages.
Housley, R. et al., "Internet X.509 Public Key Infrastructure Certificate and CRL Profile," RFC 2459, Jan. 1999, 115 pages.
Enger, R. et al., “FYI on a Network Management Tool Catalog: Tools for Monitoring and Debugging TCP/IP Internets and Interconnected Devices,” RFC 1470, Jun. 1993, 62 pages.
“Consistent Hashing,” Wikipedia—the free encyclopedia, http://en.wikipedia.org/w/index.php?title=Consistent_hashing&print..., accessed Jul. 25, 2008, 1 page.
“Control Plane,” Wikipedia—the free encyclopedia, http://en.wikipedia.org/w/index.php?title=Control_plane&printable=yes, accessed Jul. 31, 2008, 4 pages.
“editcap—Edit and/or translate the format of capture files,” ethereal.com, www.ethereal.com/docs/man-pages/editcap.1.html, accessed Apr. 15, 2004, 3 pages.
“ethereal—Interactively browse network traffic,” ethereal.com, www.ethereal.com/docs/man-pages/ethereal.1.html, accessed Apr. 15, 2004, 29 pages.
“FAQ: Network Intrusion Detection Systems,” robertgraham.com, Mar. 21, 2000, www.robertgraham.com/pubs/network-intrusion-detection.html, accessed Apr. 15, 2004.
“Forwarding Plane,” Wikipedia—the free encyclopedia, http://en.wikipedia.org/w/index.php?title=Forwarding_plane&printa..., accessed Jul. 31, 2008.
“Network Management,” Wikipedia—the free encyclopedia, http://en.wikipedia.org/w/index.php?title=Network_management&pr..., accessed Jul. 31, 2008, 3 pages.
“Network Sniffer,” linuxmigration.com, www.linuxmigration.com/quickref/admin/ethereal.html, accessed Apr. 15, 2004, 4 pages.
“Telecommunications Network,” Wikipedia—the free encyclopedia, http://en.wikipedia.org/w/index.php?title=Telecommunications_net..., accessed Jul. 31, 2008, 2 pages.
“tethereal—Dump and analyze network traffic,” ethereal.com, www.ethereal.com/docs/man-pages/tethereal.1.html, accessed Apr. 15, 2004, 11 pages.
Berners-Lee, T. et al., "Uniform Resource Identifiers (URI): Generic Syntax," IETF RFC 2396, Aug. 1998.
Hinden, R. et al., “Format for Literal IPv6 Addresses in URL's” IETF RFC 2732, Dec. 1999.
Valloppillil, V. et al., "Cache Array Routing Protocol v1.0," Feb. 1998, http://icp.ircache.net/carp.txt, accessed Jul. 25, 2008, 7 pages.
Nielsen, H. F. et al., "Network Performance Effects of HTTP/1.1, CSS1, and PNG," Jun. 24, 1997, W3 Consortium, http://www.w3.org/TR/NOTE-pipelining-970624$Id:pipeline.html, v1.48 100/10/18 19:38:45 root Exp $, pp. 1-19.
Mapp, G., "Transport Protocols—What's Wrong with TCP," Jan. 28, 2004, LCE Lecture at http://www.ice.eng.cam.ac.uk/~gem11/4F5-Lecture4.pdf, pp. 1-60.
Postel, J., "Transmission Control Protocol," Sep. 1981, Information Sciences Institute, University of Southern California, Marina del Rey, California, http://www.faqs.org/rfcs/rfc793.html, pp. 1-21.
Jacobson, V. et al., “TCP Extensions for High Performance,” May 1992, http://www.faqs.com/rfcs/rfc1323.html.
Stevens, W. “TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms,” Jan. 1997, Sunsite.dk, http://rfc.sunsite.dk/rfc/rfc2001.html, pp. 1-6.
Stevens, W. R., “TCP/IP Illustrated,” vol. 1: The Protocols, Addison-Wesley Professional, Dec. 31, 1993, pp. 1-17.
Schroeder et al., "Scalable Web Server Clustering Technologies," IEEE Network, May/Jun. 2000, pp. 38-45.
Bryhni et al., “A Comparison of Load Balancing Techniques for Scalable Web Servers,” IEEE Network, Jul./Aug. 2000, pp. 58-64.
Fielding, R. et al., “Hypertext Transfer Protocol—HTTP/1.1,” Network Working Group, RFC 2068, Jan. 1997, 152 pages.
“BIG-IP Controller with Exclusive OneConnect Content Switching Feature Provides a Breakthrough System for Maximizing Server and Network Performance,” F5 Networks, Inc., Press Release, May 8, 2001, accessed Jun. 4, 2002, 3 pages.
Hochmuth, P., "F5, CacheFlow Pump Up Content-Delivery Lines," NetworkWorld, May 4, 2001, http://www.network.com/news/2001/0507cachingonline.html, accessed Jun. 1, 2005, 3 pages.
Office Communication for U.S. Appl. No. 11/243,844 mailed Jul. 16, 2009.
Office Communication for U.S. Appl. No. 11/243,844 mailed Feb. 19, 2010.
Office Communication for U.S. Appl. No. 11/243,844 mailed Aug. 3, 2010.
Office Communication for U.S. Appl. No. 11/243,844 mailed Dec. 27, 2010.
Office Communication for U.S. Appl. No. 11/243,844 mailed Jun. 9, 2011.
Cheng, J.M. et al., “A Fast, Highly Reliable Data Compression Chip and Algorithm for Storage Systems,” IBM, vol. 40, No. 6, Nov. 1996, 11 pages.
Simpson, W. “The Point-To-Point Protocol (PPP),” RFC 1661, Jul. 1994, 54 pages.
Schneider, K. et al., "PPP for Data Compression in Data Circuit-Terminating Equipment (DCE)," RFC 1976, Aug. 1996, 10 pages.
Castineyra, I. et al., "The Nimrod Routing Architecture," RFC 1992, Aug. 1996, 27 pages.
Degermark, M. et al., "Low-Loss TCP/IP Header Compression for Wireless Networks," J.C. Baltzer AG, Science Publishers, Oct. 1997, pp. 375-387.
“Direct Access Storage Device Compression and Decompression Data Flow,” IBM Technical Disclosure Bulletin, vol. 38, No. 11, Nov. 1995, pp. 291-295.
“Drive Image Professional for DOS, OS/2 and Windows,” WSDC Download Guide, http://wsdcds01.watson.ibm.com/WSDC.nsf/Guides/Download/Applications-DriveImage.htm, accessed Nov. 22, 1999, 4 pages.
“Drive Image Professional,” WSDC Download Guide, http://wsdcds01.watson.ibm.com/wsdc.nsf/Guides/Download/Applications-DriveImage.htm, accessed May 3, 2001, 5 pages.
Electronic Engineering Times, Issue 759, Aug. 16, 1993, 37 pages.
Adaptive Lossless Data Compression—ALDC, IBM, Jun. 15, 1994, 2 pages.
ALDC1-5S—Adaptive Lossless Data Compression, IBM Microelectronics, May 1994, 2 pages.
ALDC1-20S—Adaptive Lossless Data Compression, IBM Microelectronics, May 1994, 2 pages.
ALDC1-40S—Adaptive Lossless Data Compression, IBM Microelectronics, May 1994, 2 pages.
ALDC-MACRO—Adaptive Lossless Data Compression, IBM Microelectronics, May 1994, 2 pages.
Craft, D. J., “Data Compression Choice No Easy Call,” Computer Technology Review, Jan. 1994, 2 pages.
“Data Compression Applications and Innovations Workshop,” Proceedings of a Workshop held in conjunction with the IEEE Data Compression Conference, Mar. 31, 1995, 123 pages.
IBM Microelectronics Comdex Fall 1993 Booth Location, 1 page.
“IBM Technology Products Introduces New Family of High-Performance Data Compression Products,” IBM Corporation, Somers, NY, Aug. 16, 1993, 6 pages.
Zebrose, K. L. “Integrating Hardware Accelerators into Internetworking Switches,” Telco Systems, Nov. 1993, 10 pages.
Readme, Powerquest Corporation, 1994-2002, 6 pages.
Costlow, T., “Sony Designs Faster, Denser Tape Drive,” Electronic Engineering Times, May 20, 1996, 2 pages.
Electronic Engineering Times, Issue 767, Oct. 11, 1993, 34 pages.
“IBM Announces New Feature for 3480 Subsystem,” Tucson Today, vol. 12, No. 337, Jul. 25, 1989, 1 page.
Craft, D. J., "A Fast Hardware Data Compression Algorithm and Some Algorithmic Extensions," IBM Journal of Research and Development, vol. 42, No. 6, Nov. 1998, 14 pages.
“Magstar and IBM 3590 High Performance Tape Subsystem Technical Guide,” IBM, Nov. 1996, 287 pages.
Braden, R., “Requirements for Internet Hosts,” RFC 1122, Oct. 1989, 101 pages.
About Computing & Technology, "Wireless/Networking, Nagle Algorithm," 2 pages http://compnetworking.about.com/od/tcpip/l/bldef_nagle.html accessed Dec. 6, 2005.
Australia's Academic and Research Network, "Programs and large MTU, Nagle Algorithm," 3 pages http://www.aarnet.edu.au/engineering/networkdesign/mtu/programming.html accessed Dec. 9, 2005.
Berners-Lee, T. et al., “Hypertext Transfer Protocol—HTTP/1.0,” RFC 1945, May 1996, 51 pages.
Dierks, T. et al., "The TLS Protocol, Version 1.0," RFC 2246, 1999, 80 pages.
Fielding, R. et al., "Hypertext Transfer Protocol—HTTP/1.1," RFC 2616, Jun. 1999, 114 pages.
fifi.org, "Manpage of TCP," 6 pages http://www.fifi.org/cgi-bin/man2html/usr/share/man/man7/tcp.7.gz accessed Dec. 9, 2005.
Freier, A. et al., "The SSL Protocol, Version 3.0," Netscape Communications Corporation, Mar. 1996, 60 pages.
Kessler, G. et al., “A Primer on Internet and TCP/IP Tools,” RFC 1739, Dec. 1994, 46 pages.
Nagle, J., "Congestion Control in IP/TCP Internetworks," RFC 896, Jan. 6, 1984, 13 pages.
OpenSSL, 1 page www.openssl.org accessed Apr. 12, 2006.
Paxson, V., “Known TCP Implementation Problems,” RFC 2525, Mar. 1999, 61 pages.
Rescorla, E., “SSL and TLS, Designing and Building Secure Systems,” 2001, Addison-Wesley, 46 pages.
RSA Laboratories, “PKCS #1 v2.0: RSA Cryptography Standard,” Oct. 1, 1998, 35 pages.
SearchNetworking.com, "Nagle's Algorithm," 3 pages http://searchnetworking.techtarget.com/sDefinition/0,,sid_gci754347,00.html accessed Dec. 6, 2005.
Tormasov, A. et al., “TCP/IP Options for High-Performance Data Transmission,” 4 pages http://builder.com.com/5100-6732-1050878.html accessed Dec. 9, 2005.
W3C, "HTTP/1.1 and Nagle's Algorithm," 3 pages http://www.w3.org/Protocols/HTTP/Performance/Nagle accessed Dec. 6, 2005.
Official Communication for U.S. Appl. No. 11/243,844 mailed Oct. 8, 2008.
Office Communication for U.S. Appl. No. 11/243,844 mailed Feb. 20, 2009.
Office Communication for U.S. Appl. No. 11/366,367 mailed Aug. 22, 2008.
Office Communication for U.S. Appl. No. 11/366,367 mailed Feb. 11, 2009.
Office Communication for U.S. Appl. No. 11/366,367 mailed Aug. 4, 2009.
Office Communication for U.S. Appl. No. 11/366,367 mailed Mar. 3, 2010.
Office Communication for U.S. Appl. No. 11/366,367 mailed Jun. 9, 2010.
Office Communication for U.S. Appl. No. 11/366,367 mailed Nov. 9, 2010.
Office Communication for U.S. Appl. No. 11/366,367 mailed Mar. 11, 2011.
Office Communication for U.S. Appl. No. 11/366,367 mailed Jun. 1, 2011.
Official Communication in U.S. Appl. No. 10/719,375, mailed Feb. 27, 2013.
Provisional Applications (1)
Number Date Country
60764005 Feb 2006 US
Continuations (1)
Number Date Country
Parent 11366367 Mar 2006 US
Child 13229483 US