Methods for traffic dependent direct memory access optimization and devices thereof

Information

  • Patent Grant
  • 11855898
  • Patent Number
    11,855,898
  • Date Filed
    Thursday, March 14, 2019
  • Date Issued
    Tuesday, December 26, 2023
  • Inventors
  • Original Assignees
    • F5, Inc. (Seattle, WA, US)
  • Examiners
    • Wyllie; Christopher T
  • Agents
    • Troutman Pepper Hamilton Sanders LLP (F5 PATENTS)
Abstract
Methods, non-transitory computer readable media, network traffic management apparatuses, and network traffic management systems include inspecting a plurality of incoming packets to obtain packet header data for each of the incoming packets. The packet header data is filtered using one or more filtering criteria. At least one of a plurality of optimized DMA behavior mechanisms is selected for each of the incoming packets based on associating the filtered header data for each of the incoming packets with stored profile data. The incoming packets are disaggregated based on the corresponding selected one of the optimized DMA behavior mechanisms.
Description
FIELD

This technology generally relates to traffic dependent direct memory access (DMA) optimization.


BACKGROUND

Today, software and hardware interact in a rigid way. Network interface controllers (NICs) provide only a single DMA (direct memory access) behavior mechanism, which dictates, for example, how buffers are laid down and how the NIC interacts with the driver. With a single DMA behavior mechanism, packets received by a NIC are transmitted to a queue regardless of the contents of each packet or the flow with which the packet is associated. However, different types of packets could be handled more efficiently, based on their content, if alternate DMA behavior mechanisms were utilized, such as mechanisms that make better use of the CPU, CPU cache, or DRAM.


SUMMARY

A method, implemented by a network traffic management system comprising one or more network traffic management apparatuses, server devices, or client devices, includes inspecting a plurality of incoming packets to obtain packet header data for each of the incoming packets. The packet header data is filtered using one or more filtering criteria. At least one of a plurality of optimized DMA behavior mechanisms is selected for each of the incoming packets based on associating the filtered header data for each of the incoming packets with stored profile data. The incoming packets are disaggregated based on the corresponding selected one of the optimized DMA behavior mechanisms.


A network traffic management apparatus includes memory comprising programmed instructions stored thereon and one or more processors configured to be capable of executing the stored programmed instructions to inspect a plurality of incoming packets to obtain packet header data for each of the incoming packets. The packet header data is filtered using one or more filtering criteria. At least one of a plurality of optimized DMA behavior mechanisms is selected for each of the incoming packets based on associating the filtered header data for each of the incoming packets with stored profile data. The incoming packets are disaggregated based on the corresponding selected one of the optimized DMA behavior mechanisms.


A non-transitory computer readable medium has stored thereon instructions comprising executable code that, when executed by one or more processors, causes the processors to inspect a plurality of incoming packets to obtain packet header data for each of the incoming packets. The packet header data is filtered using one or more filtering criteria. At least one of a plurality of optimized DMA behavior mechanisms is selected for each of the incoming packets based on associating the filtered header data for each of the incoming packets with stored profile data. The incoming packets are disaggregated based on the corresponding selected one of the optimized DMA behavior mechanisms.


A network traffic management system includes one or more traffic management modules, server modules, or client modules, memory comprising programmed instructions stored thereon, and one or more processors configured to be capable of executing the stored programmed instructions to inspect a plurality of incoming packets to obtain packet header data for each of the incoming packets. The packet header data is filtered using one or more filtering criteria. At least one of a plurality of optimized DMA behavior mechanisms is selected for each of the incoming packets based on associating the filtered header data for each of the incoming packets with stored profile data. The incoming packets are disaggregated based on the corresponding selected one of the optimized DMA behavior mechanisms.


This technology has a number of advantages including providing methods, non-transitory computer readable media, network traffic management apparatuses, and network traffic management systems that provide increased efficiency with network load balancing by intelligently processing different types of traffic with a correlated one of a plurality of DMA behavior mechanisms. This correlation improves CPU, CPU cache, DRAM, or bus bandwidth utilization, resulting in an improved experience for client devices. As a result, this technology provides optimized utilization of NIC storage by eliminating duplicated data to reduce the size of entry tables, which reduces the cost of utilizing expensive storage devices.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram of an exemplary network traffic management system with a network traffic management apparatus;



FIG. 2 is a block diagram of an exemplary network traffic management apparatus;



FIG. 3 is a functional block diagram of the Network Interface Controller in the exemplary network traffic management apparatus shown in FIG. 2;



FIG. 4 is a block diagram of an example of a data store in the exemplary network traffic management apparatus shown in FIG. 2;



FIG. 5 is a flowchart of an exemplary method for traffic dependent direct memory access optimization;



FIG. 6 is a functional block diagram of an example of a disaggregation in the exemplary method for traffic dependent direct memory access optimization shown in FIG. 5; and



FIG. 7 is a functional block diagram of an example of filtering in the exemplary method for traffic dependent direct memory access optimization shown in FIG. 5.





DETAILED DESCRIPTION

Referring to FIG. 1, an exemplary network environment that incorporates an exemplary network traffic management system 10 is illustrated. The network traffic management system 10 in this example includes a network traffic management apparatus 12 that is coupled to server devices 14(1)-14(n), and client devices 16(1)-16(n) via communication network(s) 18(1) and 18(2), although the network traffic management apparatus 12, server devices 14(1)-14(n), and client devices 16(1)-16(n) may be coupled together via other topologies. The network traffic management system 10 may include other network devices, such as one or more routers or switches, for example, which are known in the art and thus will not be described herein. This technology provides a number of advantages including methods, non-transitory computer readable media, network traffic management systems, and network traffic management apparatuses that provide increased efficiency of network load balancing by intelligently processing different types of traffic with a correlated one of a plurality of DMA behavior mechanisms.


In this particular example, the network traffic management apparatus 12, server devices 14(1)-14(n), and client devices 16(1)-16(n) are disclosed in FIG. 1 as dedicated hardware devices. However, one or more of the network traffic management apparatus 12, server devices 14(1)-14(n), or client devices 16(1)-16(n) can also be implemented in software within one or more other devices in the network traffic management system 10. As used herein, the term “module” refers to either an implementation as a dedicated hardware device or apparatus, or an implementation in software hosted by another hardware device or apparatus that may be hosting one or more other software components or implementations.


As one example, the network traffic management apparatus 12, as well as any of its components, models, or applications, can be a module implemented as software executing on one of the server devices 14(1)-14(n), and many other permutations and types of implementations can also be used in other examples. Moreover, any or all of the network traffic management apparatus 12, server devices 14(1)-14(n), and client devices 16(1)-16(n), can be implemented, and may be referred to herein, as a module.


Referring to FIGS. 1-4, the network traffic management apparatus 12 of the network traffic management system 10 may perform any number of functions including managing network traffic, load balancing network traffic across the server devices 14(1)-14(n), or accelerating network traffic associated with web applications hosted by the server devices 14(1)-14(n). The network traffic management apparatus 12 in this example includes one or more processor(s) 20, a memory 22, and a communication interface 24, which are coupled together by a bus 26, although the network traffic management apparatus 12 can include other types or numbers of elements in other configurations.


The processor(s) 20 of the network traffic management apparatus 12 may execute programmed instructions stored in the memory 22 of the network traffic management apparatus 12 for any number of the functions identified above. The processor(s) 20 of the network traffic management apparatus 12 may include one or more central processing units (CPUs) or general purpose processors with one or more processing cores, for example, although other types of processor(s) can also be used.


The memory 22 of the network traffic management apparatus 12 stores these programmed instructions for one or more aspects of the present technology as described and illustrated herein, although some or all of the programmed instructions could be stored elsewhere. A variety of different types of memory storage devices, such as random access memory (RAM), read only memory (ROM), hard disk, solid state drives, flash memory, or other computer readable medium which is read from and written to by a magnetic, optical, or other reading and writing system that is coupled to the processor(s) 20, can be used for the memory 22.


Accordingly, the memory 22 of the network traffic management apparatus 12 can store one or more applications that can include computer executable instructions that, when executed by the network traffic management apparatus 12, cause the network traffic management apparatus 12 to perform actions, such as to transmit, receive, or otherwise process messages, for example, and to perform other actions described and illustrated below with reference to FIGS. 5-7. The application(s) can be implemented as components of other applications. Further, the application(s) can be implemented as operating system extensions, plugins, or the like.


Even further, the application(s) may be operative in a cloud-based computing environment. The application(s) can be executed within or as virtual machine(s) or virtual server(s) that may be managed in a cloud-based computing environment. Also, the application(s), and even the network traffic management apparatus 12 itself, may be located in virtual server(s) running in a cloud-based computing environment rather than being tied to one or more specific physical network computing devices. Also, the application(s) may be running in one or more virtual machines (VMs) executing on the network traffic management apparatus 12. Additionally, in one or more examples of this technology, virtual machine(s) running on the network traffic management apparatus 12 may be managed or supervised by a hypervisor.


In this particular example, the memory 22 of the network traffic management apparatus 12 may include a filtering criteria data storage 28, a profile data storage 30, and a data store 32, although the memory 22 can include other policies, modules, databases, or applications, for example.


The filtering criteria stored in the filtering criteria data storage 28 may include, by way of example, virtual local area network (VLAN) data criteria, listening data criteria, pool members data criteria, and self-internet protocol (IP) criteria, although other types and numbers of criteria may be used.


The profile data storage 30 may include profile data, such as a Layer-4 proxy profile (L4 proxy profile), a Partial Layer-7 proxy profile (Partial L7 proxy profile), and a Full Layer-7 proxy profile (Full L7 proxy profile), and associated rules for selecting one or more DMA behavior mechanisms.


The L4 proxy profile includes one or more policies used to make a determination about the particular processing of each packet. With the L4 proxy profile, processing is on a packet-by-packet basis and acknowledgements are not required, which reduces overhead.


The Partial L7 proxy profile in the profile data storage 30 includes one or more policies for processing packets associated with a flow, such as HTTP packets by way of example. With the Partial L7 proxy profile, packets are processed on a per-flow basis instead of a packet-by-packet basis and then allocated to a particular one of the server devices 14(1)-14(n) based on one of a plurality of identified types of flow.


The Full L7 proxy profile in the profile data storage 30 also includes one or more policies for processing packets associated with a flow, such as HTTP packets by way of example. With the Full L7 proxy profile, all the packets associated with a flow are collected and then processed. By way of example, this processing may comprise examining the full stream of traffic for virus or attack signatures. By way of another example, this processing may include examining an HTTP request's URL for special processing such as compression, although other types of processing may be performed.


The stored associated rules are used to select one or more optimized DMA behavior mechanisms. By way of example only, one of the rules for selecting one of the DMA behavior mechanisms may comprise determining when a packet is processed using an L4 proxy profile. Upon determining the packet is processed using the L4 proxy profile, another one of the rules may be used to determine whether the Header DMA mechanism or the Contiguous Ring DMA mechanism is selected for processing the packet. In other examples, the rules may comprise determining whether a flow is being processed by either the Partial L7 proxy profile or the Full L7 proxy profile. Upon determining the packet is processed using either the Partial L7 proxy profile or the Full L7 proxy profile, then one of a plurality of DMA mechanisms may be selected for processing the packets in the flow.
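
By way of illustration only, the following minimal Python sketch models how stored rules of this kind might map a proxy profile to a DMA behavior mechanism. The profile names, the contents of the rule table, and the headers_only flag are assumptions made for this sketch and are not prescribed by this technology.

```python
from enum import Enum


class DmaMechanism(Enum):
    HEADER = "header"
    CONTIGUOUS_RING = "contiguous_ring"
    DIS_CONTIGUOUS_RING = "dis_contiguous_ring"
    SCATTER_GATHER = "scatter_gather"


# Hypothetical rule table: each proxy profile is associated with the DMA
# behavior mechanisms that may be selected for packets processed with it.
PROFILE_RULES = {
    "l4_proxy": [DmaMechanism.HEADER, DmaMechanism.CONTIGUOUS_RING],
    "partial_l7_proxy": [DmaMechanism.CONTIGUOUS_RING, DmaMechanism.DIS_CONTIGUOUS_RING],
    "full_l7_proxy": [DmaMechanism.SCATTER_GATHER, DmaMechanism.CONTIGUOUS_RING],
}


def select_mechanism(profile: str, headers_only: bool) -> DmaMechanism:
    """Apply a stored rule: traffic that only needs header examination uses the
    header mechanism when the profile permits it; otherwise fall back to the
    first non-header candidate for that profile."""
    candidates = PROFILE_RULES[profile]
    if headers_only and DmaMechanism.HEADER in candidates:
        return DmaMechanism.HEADER
    return next(m for m in candidates if m is not DmaMechanism.HEADER)


if __name__ == "__main__":
    print(select_mechanism("l4_proxy", headers_only=True))        # header
    print(select_mechanism("full_l7_proxy", headers_only=False))  # scatter/gather
```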


The data store 32 may store data from multiple data sources which may be utilized for DMA selection and may include a plurality of data entry tables. By way of a further example, the data store 32 is illustrated in FIG. 4. In this example, the data store 32 has a client n-tuple data entry table 54 of 128K entries for storing a set of client side flow information and transform information and a server n-tuple data entry table 56 of 128K entries for storing a set of server side flow information and transform information. The transform information may include, for example, information for translating the server side IP into a client side IP while responding to a client request, or vice versa. Additionally, during a filtering process the filtering criteria, such as virtual local area network (VLAN) data criteria, listening data criteria, pool members data criteria, or self-internet protocol (IP) criteria, may be compared against the data stored in the client n-tuples 54 and the server n-tuples 56 in the data store 32. The filtered header data may include a set of flow and transform information, for example, a source IP address, destination IP address, server source, and pool member, and may be compared with the set of flow information and transform information stored in the client n-tuples 54 and the server n-tuples 56. When the filtered header data in the memory is replicated in the client n-tuples 54 and the server n-tuples 56, the replicated filtered data in the client n-tuples 54 and the server n-tuples 56 of the data store may be deleted or eliminated.
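
By way of illustration only, the following minimal Python sketch models the client side and server side n-tuple tables and the removal of replicated entries described above. The 4-tuple key, the table API, and the example addresses are assumptions for this sketch rather than the actual layout of the data store 32.

```python
from collections import namedtuple

FlowKey = namedtuple("FlowKey", "src_ip src_port dst_ip dst_port")


class FlowTable:
    """A simplified stand-in for one of the n-tuple data entry tables."""

    def __init__(self):
        self._entries = {}  # FlowKey -> transform information

    def insert(self, key, transform):
        self._entries[key] = transform

    def lookup(self, key):
        return self._entries.get(key)

    def drop_if_replicated(self, key, in_memory_copy):
        # When the filtered header data held in memory already replicates this
        # entry, the duplicate in the table can be deleted.
        if self._entries.get(key) == in_memory_copy:
            del self._entries[key]


client_table = FlowTable()  # client side flows (akin to table 54)
server_table = FlowTable()  # server side flows (akin to table 56)

key = FlowKey("203.0.113.5", 40212, "10.0.0.1", 80)
client_table.insert(key, {"translate_dst": "192.168.1.20"})

# Filtered header data already held in memory makes the table copy redundant.
client_table.drop_if_replicated(key, {"translate_dst": "192.168.1.20"})
assert client_table.lookup(key) is None
```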


Referring to FIGS. 1-2 and 4, the Network Interface Controller (NIC) 24 of the network traffic management apparatus 12 operatively couples and communicates between the network traffic management apparatus 12, server devices 14(1)-14(n), and client devices 16(1)-16(n), which are coupled together at least in part by the communication network(s) 18(1) and 18(2), although other types or numbers of communication networks or systems with other types or numbers of connections or configurations to other devices or elements can also be used.


By way of example only, the communication network(s) 18(1) and 18(2) can include local area network(s) (LAN(s)) or wide area network(s) (WAN(s)), and can use TCP/IP over Ethernet and industry-standard protocols, although other types or numbers of protocols or communication networks can be used. The communication network(s) 18(1) and 18(2) in this example can employ any suitable interface mechanisms and network communication technologies including, for example, teletraffic in any suitable form (e.g., voice, modem, and the like), Public Switched Telephone Network (PSTNs), Ethernet-based Packet Data Networks (PDNs), combinations thereof, and the like.


Referring to FIG. 3, a functional block diagram of an example of the network interface controller 24 in the exemplary network traffic management apparatus 12 is illustrated, although other configurations for the network interface controller can be used. In this example, the network interface controller 24 is configured to receive packets via the Ethernet medium access control (MAC) or port 48, which are provided to a DMA selector mechanism 36. The DMA selector mechanism 36 inspects, filters, and selects one or more DMA behavior mechanisms. Next, after the selection by the DMA selector mechanism 36, a disaggregator 38 disaggregates each of the packets to one of the selected DMA behavior mechanisms in Queue-1 40 to Queue-n 44 to be processed and then output by a dequeuing device 46.


While the network traffic management apparatus 12 is illustrated in this example as including a single device, the network traffic management apparatus 12 in other examples can include a plurality of devices or blades each having one or more processors (each processor with one or more processing cores) that implement one or more steps of this technology. In these examples, one or more of the devices can have a dedicated communication interface or memory. Alternatively, one or more of the devices can utilize the memory, communication interface, or other hardware or software components of one or more other devices included in the network traffic management apparatus 12.


Additionally, one or more of the devices that together comprise the network traffic management apparatus 12 in other examples can be stand-alone devices or integrated with one or more other devices or apparatuses, such as one or more of the server devices 14(1)-14(n), for example. Moreover, one or more of the devices of the network traffic management apparatus 12 in these examples can be in a same or a different communication network including one or more public, private, or cloud networks, for example.


Each of the server devices 14(1)-14(n) of the network traffic management system 10 in this example includes one or more processors, a memory, and a network interface controller, which are coupled together by a bus or other communication link, although other numbers and/or types of network devices could be used. The server devices 14(1)-14(n) in this example can include application servers, database servers, access control servers, or encryption servers, for example, that exchange communications along communication paths expected based on application logic in order to facilitate interactions with an application by users of the client devices 16(1)-16(n).


Accordingly, in some examples, one or more of the server devices 14(1)-14(n) process login and other requests received from the client devices 16(1)-16(n) via the communication network(s) 18(1) and 18(2) according to the HTTP-based application RFC protocol, for example. A web application may be operating on one or more of the server devices 14(1)-14(n) and transmitting data (e.g., files or web pages) to the client devices 16(1)-16(n) (e.g., via the network traffic management apparatus 12) in response to requests from the client devices 16(1)-16(n). The server devices 14(1)-14(n) may be hardware or software or may represent a system with multiple servers in a pool, which may include internal or external networks.


Although the server devices 14(1)-14(n) are illustrated as single devices, one or more actions of each of the server devices 14(1)-14(n) may be distributed across one or more distinct network computing devices that together comprise one or more of the server devices 14(1)-14(n). Moreover, the server devices 14(1)-14(n) are not limited to a particular configuration. Thus, the server devices 14(1)-14(n) may contain network computing devices that operate using a master/slave approach, whereby one of the network computing devices of the server devices 14(1)-14(n) operates to manage or otherwise coordinate operations of the other network computing devices. The server devices 14(1)-14(n) may operate as a plurality of network computing devices within a cluster architecture, a peer-to-peer architecture, virtual machines, or within a cloud architecture, for example.


Thus, the technology disclosed herein is not to be construed as being limited to a single environment and other configurations and architectures are also envisaged. For example, one or more of the server devices 14(1)-14(n) can operate within the network traffic management apparatus 12 itself rather than as a stand-alone server device communicating with the network traffic management apparatus 12 via communication network(s) 18(2). In this example, the one or more of the server devices 14(1)-14(n) operate within the memory 22 of the network traffic management apparatus 12.


The client devices 16(1)-16(n) of the network traffic management system 10 in this example include any type of computing device that can exchange network data, such as mobile, desktop, laptop, or tablet computing devices, virtual machines (including cloud-based computers), or the like. Each of the client devices 16(1)-16(n) in this example includes a processor, a memory, and a communication interface, which are coupled together by a bus or other communication link (not illustrated), although other numbers or types of components could also be used.


The client devices 16(1)-16(n) may run interface applications, such as standard web browsers or standalone client applications, which may provide an interface to make requests for, and receive content stored on, one or more of the server devices 14(1)-14(n) via the communication network(s) 18(1) and 18(2). The client devices 16(1)-16(n) may further include a display device, such as a display screen or touchscreen, or an input device, such as a keyboard for example (not illustrated). Additionally, one or more of the client devices 16(1)-16(n) can be configured to execute software code (e.g., JavaScript code within a web browser) in order to log client-side data and provide the logged data to the network traffic management apparatus 12, as described and illustrated in more detail later.


Although the exemplary network traffic management system 10 with the network traffic management apparatus 12, server devices 14(1)-14(n), client devices 16(1)-16(n), and communication network(s) 18(1) and 18(2) are described and illustrated herein, other types or numbers of systems, devices, components, or elements in other topologies can be used. It is to be understood that the systems of the examples described herein are for exemplary purposes, as many variations of the specific hardware and software used to implement the examples are possible, as will be appreciated by those skilled in the relevant art(s).


One or more of the components depicted in the network traffic management system 10, such as the network traffic management apparatus 12, server devices 14(1)-14(n), or client devices 16(1)-16(n), for example, may be configured to operate as virtual instances on the same physical machine. In other words, one or more of the network traffic management apparatus 12, server devices 14(1)-14(n), or client devices 16(1)-16(n) may operate on the same physical device rather than as separate devices communicating through communication network(s) 18(1) or 18(2). Additionally, there may be more or fewer network traffic management apparatuses, client devices, or server devices than illustrated in FIG. 1.


In addition, two or more computing systems or devices can be substituted for any one of the systems or devices in any example. Accordingly, principles and advantages of distributed processing, such as redundancy and replication also can be implemented, as desired, to increase the robustness and performance of the devices and systems of the examples. The examples may also be implemented on computer system(s) that extend across any suitable network using any suitable interface mechanisms and traffic technologies, including by way of example only, wireless traffic networks, cellular traffic networks, Packet Data Networks (PDNs), the Internet, intranets, and combinations thereof.


The examples may also be embodied as one or more non-transitory computer readable media having instructions stored thereon, such as in the memory 22, for one or more aspects of the present technology, as described and illustrated by way of the examples herein. The instructions in some examples include executable code that, when executed by one or more processors, such as the processor(s) 20, cause the processors to carry out steps necessary to implement the methods of the examples of this technology that are described and illustrated herein.


An exemplary method of traffic dependent DMA behavior will now be described with reference to FIGS. 1-7. Referring more specifically to FIG. 5, in step 500 in this example, the network traffic management apparatus 12 receives a plurality of packets from the client devices 16(1)-16(n) requesting a service at the network interface controller 24, although the packets could be received from other sources. Each of the plurality of packets may include a variety of different types of data, such as packet header data and payload data by way of example.


In step 510, the network traffic management apparatus 12 performs an inspection of the packet header data in each of the plurality of packets to determine source address data and destination address data from the packet header data, although other types of data in each of the plurality of packets could be inspected and determined.


In step 520, the network traffic management apparatus 12 filters the determined source address data and destination address data from the packet header data for each packet using one or more filtering criteria stored in the filtering criteria data storage 28 to obtain filtered data, although other data may be filtered from the packet header data. The stored filtering criteria may include, by way of example, virtual local area network (VLAN) data criteria, listening data criteria, pool members data criteria, self-internet protocol (IP) criteria, and/or tenant account data, described in further detail below. Examples of this filtering using the virtual local area network (VLAN) data criteria are described below with reference to FIG. 7.


In this example, when the received packet is originating from a client request, and the packet is received over, for example, a VLAN 1, the packet header data is inspected to obtain destination address data, such as a destination IP address of 10.0.0.1:80, which is a virtual server internet protocol (IP) address. The network traffic management apparatus 12 may map this virtual server IP address to stored profile data associated with, for example, an L4 proxy profile. This process of mapping the virtual server IP address to the profile data is considered filtering with VLAN data criteria.


In another example, when the received packet is originating from a server response and the packet is received over, for example, a VLAN 2, the packet header is inspected to obtain source address data, such as a source IP address and port number. Since this is a response coming in from the server for which earlier load balancing decisions had been made based on the earlier client request packet header data, which was associated with, for example, the L4 proxy profile, this packet that is associated with the server response will also be associated with the L4 proxy profile associated with the client request by the network traffic management apparatus 12. Since the L4 proxy profile in this example is associated with, for example, the contiguous ring DMA behavior mechanism, the contiguous ring DMA behavior mechanism is selected to service this packet associated with the server response. This process of mapping the server response to the profile data based on listening for the client request profile is considered filtering with listening data criteria. Although two examples of filtering are provided above with reference to FIG. 7, other types of filtering may be used.
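
By way of illustration only, the following minimal Python sketch models the two filtering examples above. The virtual server table, the flow key used to "listen" for the server response, and the addresses shown are assumptions made solely for this sketch.

```python
# Virtual servers reachable over VLAN 1, mapped to stored profile data.
VIRTUAL_SERVERS = {("10.0.0.1", 80): "l4_proxy"}

# Profiles remembered per flow so a later server response can be matched.
FLOW_PROFILES = {}


def filter_client_packet(dst_ip, dst_port, client_ip, client_port, pool_member):
    """VLAN data criteria: map the destination (virtual server) IP:port to a
    proxy profile, then remember it keyed by the server side flow so the
    response can be recognized later."""
    profile = VIRTUAL_SERVERS.get((dst_ip, dst_port))
    if profile is not None:
        FLOW_PROFILES[(pool_member, client_ip, client_port)] = profile
    return profile


def filter_server_packet(server_addr, client_ip, client_port):
    """Listening data criteria: a response from the pool member selected for
    an earlier client request reuses the profile chosen for that request."""
    return FLOW_PROFILES.get((server_addr, client_ip, client_port))


# Client request over VLAN 1 to the virtual server 10.0.0.1:80.
print(filter_client_packet("10.0.0.1", 80, "203.0.113.5", 40212,
                           pool_member=("192.168.1.20", 8080)))        # l4_proxy
# Server response over VLAN 2 from the selected pool member.
print(filter_server_packet(("192.168.1.20", 8080), "203.0.113.5", 40212))
```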


By way of another example, when the network traffic management system 10 is providing TCP stack load balancing with URL matching with TLS decryption, then the network traffic management apparatus 12 may filter the source IP from the packet header data, which may be matched to identify packets in received traffic for load balancing.


In another example, when the network traffic management system 10 is providing network address translation (NAT'ing) behavior, then the network traffic management apparatus 12 may filter the local host's IP address (self-IP) from the destination IP in the packet header data of received packets to be used in the NAT'ing behavior. This example represents traffic where only the headers need be examined to provide the translation service.


In yet another example, a security engine in or associated with the network traffic management apparatus 12 may have determined that a specific IP time to live field value has been associated with hacker activity in traffic. Accordingly, in this example, the network traffic management apparatus 12 may filter packets in traffic matching this criterion and send those matching packets to a full L7 proxy for deep inspection.


In step 530, the network traffic management apparatus 12 may select one or more optimized DMA behavior mechanisms from a plurality of optimized DMA behavior mechanisms for each of the packets based on the filtered packet header data. For example, the filtering which mapped the packet header data to the L4 proxy profile can now be used to select either a header behavior mechanism or a contiguous ring mechanism for the packet based on the association with the L4 proxy profile. In other examples, the filtering associates the packets with other proxy profiles which are correlated to other DMA behavior mechanisms. In other examples, the filtering may select DMA behavior based on a comparison of packet length against the MTU. A NAT'ing application that also employs IP fragmentation may choose to direct packets for fragmentation through a contiguous ring instead of the header mechanism. In other examples, the ingress physical port may also be used in the filtering process. In a URL examination example, packets arriving from a client warrant examination and thus require a contiguous ring, while packets from servers will only require the header mechanism. If the ingress physical port can be used to distinguish client from server, then the ingress physical port will be used in the filtering mechanism to select DMA behavior.
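
By way of illustration only, the following minimal Python sketch combines the selection rules mentioned above (proxy profile association, packet length versus MTU, and the ingress physical port distinguishing client facing from server facing traffic). The MTU value, the port assignments, and the specific mechanism choices are assumptions made for this sketch.

```python
MTU = 1500                       # assumed maximum transmission unit
CLIENT_FACING_PORTS = {0, 1}     # hypothetical ingress ports facing clients


def select_dma(profile, packet_len, ingress_port):
    if profile == "l4_proxy":
        # A NAT'ing application that fragments oversized packets may direct
        # them through a contiguous ring instead of the header mechanism.
        return "contiguous_ring" if packet_len > MTU else "header"
    if ingress_port in CLIENT_FACING_PORTS:
        # In the URL examination example, client packets need full payloads.
        return "contiguous_ring"
    # Server side packets in that example only require the header mechanism.
    return "header"


print(select_dma("l4_proxy", packet_len=1400, ingress_port=0))       # header
print(select_dma("l4_proxy", packet_len=3000, ingress_port=0))       # contiguous_ring
print(select_dma("full_l7_proxy", packet_len=900, ingress_port=2))   # header
```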


By way of example, other exemplary types of DMA behavior mechanisms which may be selected may include the scatter/gather mechanism, the contiguous ring mechanism, the dis-contiguous ring mechanism and the header behavior mechanism, although other types and/or numbers of mechanisms may be used.


With the scatter/gather mechanism, a single procedure call sequentially reads data from multiple buffers and writes it to a single data stream, or reads data from a data stream and writes it to multiple buffers. The buffers are given in a vector of buffers, and the packets in this scatter/gather mechanism are long-lived. The scatter/gather mechanism refers to the process of gathering data from, or scattering data into, the given set of buffers. Scatter/gather I/O can operate synchronously or asynchronously and may be used for efficiency and convenience.
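
By way of illustration only, scatter/gather I/O of this kind is exposed on POSIX systems through readv and writev, available in Python via os.readv and os.writev (Unix only); the temporary file used here simply stands in for a data stream.

```python
import os

fd = os.open("/tmp/sg_example.bin", os.O_RDWR | os.O_CREAT | os.O_TRUNC, 0o600)

# Gather: a single call writes data collected from multiple buffers
# sequentially into one data stream.
os.writev(fd, [b"hdr:", b"payload-1", b"payload-2"])

# Scatter: a single call reads the stream back into multiple buffers in order.
os.lseek(fd, 0, os.SEEK_SET)
header, body = bytearray(4), bytearray(18)
total = os.readv(fd, [header, body])

print(total, bytes(header), bytes(body))  # 22 b'hdr:' b'payload-1payload-2'
os.close(fd)
os.unlink("/tmp/sg_example.bin")
```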


With the contiguous ring mechanism, upon receiving the packets from the client, a buffer holds the packets for a pre-determined period of time before the packets are transmitted to the server. With this contiguous ring mechanism, the packets arrive inline and are processed in a standard manner upon arrival. The size of the contiguous ring dictates the amount of data that can be stored, and a fixed maximum speed of data transfer provides a maximum time within which each incoming packet must be processed.
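
By way of illustration only, the following minimal Python sketch models a contiguous receive ring; the slot count is an assumption, and a real NIC ring holds DMA buffers rather than Python objects. Because the ring size is fixed, a full ring means packets were not processed within the available time.

```python
class ContiguousRing:
    def __init__(self, slots=8):
        self.buf = [None] * slots
        self.slots = slots
        self.head = 0   # next slot the producer (NIC) fills
        self.tail = 0   # next slot the consumer (driver) drains

    def produce(self, packet):
        if (self.head + 1) % self.slots == self.tail:
            raise BufferError("ring full: packet arrived before older ones were processed")
        self.buf[self.head] = packet
        self.head = (self.head + 1) % self.slots

    def consume(self):
        if self.tail == self.head:
            return None                      # ring empty
        packet, self.buf[self.tail] = self.buf[self.tail], None
        self.tail = (self.tail + 1) % self.slots
        return packet


ring = ContiguousRing(slots=4)
for i in range(3):
    ring.produce(f"packet-{i}")
print([ring.consume() for _ in range(3)])    # packets come back inline, in order
```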


The dis-contiguous ring mechanism is an optimized form of the contiguous ring mechanism. The dis-contiguous ring mechanism resembles the contiguous ring mechanism; however, with the dis-contiguous ring mechanism, when the data arrives a determination is made to identify which of the arrived data requires a longer processing time than a threshold. Upon determining that data requires a longer processing time, that data is skipped, which may result in holes or gaps in the buffer for the skipped data.
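
By way of illustration only, the following minimal Python sketch shows the skipping behavior described above; the cost estimate and threshold are assumptions, and a real implementation would track the resulting holes in the ring itself.

```python
def drain_discontiguous(slots, cost_estimate, threshold):
    """Process ring slots in order, but skip any packet whose estimated
    processing time exceeds the threshold, leaving a hole to revisit later."""
    processed, holes = [], []
    for index, packet in enumerate(slots):
        if packet is None:
            continue                      # already drained
        if cost_estimate(packet) > threshold:
            holes.append(index)           # skipped: this slot becomes a hole
            continue
        processed.append(packet)
        slots[index] = None
    return processed, holes


slots = [b"small", b"x" * 9000, b"tiny", None]
done, holes = drain_discontiguous(slots, cost_estimate=len, threshold=2048)
print(done)    # [b'small', b'tiny']
print(holes)   # [1] -- the large packet is left behind as a gap
```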


With the header behavior mechanism, upon receiving the packets, the packet header data and predictably located values in the payload data may be inspected, and the packets are then forwarded based on the inspected data.
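
By way of illustration only, the following minimal Python sketch inspects only header fields at predictable offsets before forwarding a frame; the offsets assume an untagged Ethernet/IPv4/TCP frame with no IP options, which is an assumption made solely for this sketch.

```python
import struct


def inspect_and_forward(frame: bytes, forward):
    """Read source/destination addresses and ports from fixed offsets and
    hand the frame to a forwarding callback; the payload is never examined."""
    src_ip, dst_ip = struct.unpack_from("!4s4s", frame, 26)    # IPv4 addresses
    src_port, dst_port = struct.unpack_from("!HH", frame, 34)  # TCP ports
    forward(dst_ip, dst_port, frame)


# A synthetic Ethernet+IPv4+TCP header followed by payload, for demonstration.
frame = (bytes(26) + bytes([10, 0, 0, 2]) + bytes([10, 0, 0, 1])
         + struct.pack("!HH", 40212, 80) + bytes(16) + b"payload")
inspect_and_forward(frame, lambda ip, port, f: print(ip, port, len(f)))
```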


In step 540, the network traffic management apparatus 12 disaggregates each of the plurality of incoming packets based on the corresponding selected one of the optimized DMA behavior mechanisms to the one or more queues, wherein the queues are data structures for processing packets as illustrated by way of example back in FIG. 3. In this example, the packets are disaggregated by the disaggregator 38 into one of Queue-1 40 to Queue-n 44 to be processed using the selected DMA behavior mechanism for the particular Queue-1 40 to Queue-n 44, such as the selected scatter/gather mechanism, contiguous ring mechanism, header behavior mechanism, or dis-contiguous ring behavior mechanism by way of example.
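
By way of illustration only, the following minimal Python sketch models the disaggregator placing each packet on the queue bound to its selected DMA behavior mechanism, followed by a simple dequeue pass; the queue names and the round-robin scheduling policy are assumptions made for this sketch.

```python
from collections import deque

QUEUES = {mechanism: deque() for mechanism in
          ("scatter_gather", "contiguous_ring", "dis_contiguous_ring", "header")}


def disaggregate(packet, mechanism):
    """Place the packet on the queue whose DMA behavior mechanism was selected."""
    QUEUES[mechanism].append(packet)


def dequeue_round_robin():
    """Drain one packet per non-empty queue per pass, mimicking the dequeuing device."""
    while any(QUEUES.values()):
        for mechanism, queue in QUEUES.items():
            if queue:
                yield mechanism, queue.popleft()


disaggregate(b"client-request", "contiguous_ring")
disaggregate(b"server-response", "header")
for mechanism, packet in dequeue_round_robin():
    print(mechanism, packet)
```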


Accordingly, as illustrated by way of the examples herein, increased efficiency with network load balancing is provided by this technology intelligently processing different types of traffic with an associated DMA behavior mechanism, which improves CPU, CPU cache, DRAM, or bus bandwidth utilization, resulting in an improved experience for users of client devices. As a result, this technology provides the advantages of optimized utilization of NIC storage by eliminating duplicated data to reduce the size of entry tables, which reduces the cost of utilizing expensive storage devices.


Having thus described the basic concept of the invention, it will be rather apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only, and is not limiting. Various alterations, improvements, and modifications will occur and are intended to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested hereby, and are within the spirit and scope of the invention. Additionally, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefore, is not intended to limit the claimed processes to any order except as may be specified in the claims. Accordingly, the invention is limited only by the following claims and equivalents thereto.

Claims
  • 1. A method for filtering and processing incoming packets implemented by a network traffic management system comprising one or more network traffic management apparatuses, server devices or client devices, the method comprising: inspecting an incoming packet to obtain packet header data; filtering the obtained packet header data for the incoming packet using a filtering criteria; associating the filtered header data for the incoming packet with a stored network profile data, wherein the stored network profile data comprises proxy profiles and wherein the filtered header data comprises filtered source address data or destination address data from the packet header data which are associated with the proxy profiles; selecting an optimized one of a plurality DMA behavior mechanism for the incoming packet based on the associated stored network profile data by using the proxy profiles associated with the filtered header data, the proxy profiles being correlated to one of the DMA behavior mechanisms, wherein the proxy profiles comprise a layer-4 proxy profile, a partial layer-7 proxy profile, or a full layer-7 proxy profile; and disaggregating the incoming packet based on the corresponding selected optimized one of the plurality DMA behavior mechanism.
  • 2. The method of claim 1, wherein the stored profile data comprises a rule for selecting the optimized DMA behavior mechanism; and wherein the optimized DMA behavior mechanism comprise a scatter and gather mechanism, a contiguous ring mechanism, a dis-contiguous ring mechanism, or a header behavior mechanism.
  • 3. The method of claim 1, wherein the filtering criteria comprises virtual local area network (VLAN) data criteria, listening data criteria, pool members data criteria, or self-internet protocol (IP) criteria.
  • 4. The method of claim 3, wherein the filtering the obtained packet header data for the incoming packet further comprises filtering using the listening data criteria, wherein the listening data criteria comprises listening for a virtual server internet protocol (IP) address; and wherein the selecting further comprises selecting the contiguous ring mechanism or the header behavior mechanism for disaggregating the incoming packet, upon determining that the virtual server IP address matches a rule for selecting the contiguous ring mechanism or the header behavior mechanism.
  • 5. The method of claim 1, further comprising selecting the optimized DMA behavior mechanism based on a policy associated with a tenant.
  • 6. A network traffic management apparatus, comprising a memory comprising programmed instructions stored thereon and one or more processors configured to be capable of executing the stored programmed instructions to: inspect an incoming packet to obtain packet header data; filter the obtained packet header data for the incoming packet using a filtering criteria; associate the filtered header data for the incoming packet with a stored network profile data, wherein the stored network profile data comprises proxy profiles and wherein the filtered header data comprises filtered source address data or destination address data from the packet header data which are associated with the proxy profiles; select an optimized one of a plurality DMA behavior mechanism for the incoming packet based on the associated stored network profile data by using the proxy profiles associated with the filtered header data, the proxy profiles each being correlated to one of the DMA behavior mechanisms, wherein the proxy profiles comprise a layer-4 proxy profile, a partial layer-7 proxy profile, or a full layer-7 proxy profile; and disaggregate the incoming packet based on the corresponding selected optimized one of the plurality DMA behavior mechanism.
  • 7. The network traffic management apparatus of claim 6, wherein the stored profile data comprises a rule for selecting the optimized DMA behavior mechanism; and wherein the optimized DMA behavior mechanism comprise a scatter and gather mechanism, a contiguous ring mechanism, a dis-contiguous ring mechanism, or a header behavior mechanism.
  • 8. The network traffic management apparatus of claim 6, wherein the one or more filtering criteria comprises virtual local area network (VLAN) data criteria, listening data criteria, pool members data criteria, or self-internet protocol (IP) criteria.
  • 9. The network traffic management apparatus of claim 8, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions for the filter and the select to: filter using the listening data criteria, wherein the listening data criteria comprises listening for a virtual server internet protocol (IP) address; and select the contiguous ring mechanism or the header behavior mechanism for disaggregating the incoming packet, upon determining that the virtual server IP address matches a rule for selecting the contiguous ring mechanism or the header behavior mechanism.
  • 10. The network traffic management apparatus of claim 6, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to: select the optimized DMA behavior mechanism based on a policy associated with a tenant.
  • 11. A non-transitory computer readable medium having stored thereon instructions for filtering and processing incoming packets comprising executable code which when executed by one or more processors, causes the one or more processors to: inspect an incoming packet to obtain packet header data; filter the obtained packet header data for the incoming packet using a filtering criteria; associate the filtered header data for the incoming packet with a stored network profile data, wherein the stored network profile data comprises proxy profiles and wherein the filtered header data comprises filtered source address data or destination address data from the packet header data which are associated with the proxy profiles; select an optimized one of a plurality DMA behavior mechanism for the incoming packet based on the associated stored network profile data by using the proxy profiles associated with the filtered header data, the proxy profiles each being correlated to one of the DMA behavior mechanisms, wherein the proxy profiles comprise a layer-4 proxy profile, a partial layer-7 proxy profile, or a full layer-7 proxy profile; and disaggregate the incoming packet based on the corresponding selected optimized one of the plurality DMA behavior mechanism.
  • 12. The non-transitory computer readable medium of claim 11, wherein the stored profile data comprises a rule for selecting the optimized DMA behavior mechanism; and wherein the optimized DMA behavior mechanism comprise a scatter and gather mechanism, a contiguous ring mechanism, a dis-contiguous ring mechanism, or a header behavior mechanism.
  • 13. The non-transitory computer readable medium of claim 11, wherein the one or more filtering criteria comprises virtual local area network (VLAN) data criteria, listening data criteria, pool members data criteria, or self-internet protocol (IP) criteria.
  • 14. The non-transitory computer readable medium of claim 13, wherein the executable code when executed by the one or more processors further causes the one or more processors for the filter and the select to: filter using the listening data criteria, wherein the listening data criteria comprises listening for a virtual server internet protocol (IP) address; and select the contiguous ring mechanism or the header behavior mechanism for disaggregating the incoming packet, upon determining that the virtual server IP address matches a rule for selecting the contiguous ring mechanism or the header behavior mechanism.
  • 15. The non-transitory computer readable medium of claim 11, wherein the executable code when executed by the one or more processors further causes the one or more processors to: select the optimized DMA behavior mechanism based on a policy associated with a tenant.
  • 16. A network traffic management system, comprising one or more traffic management apparatuses, client devices, or server devices, the network traffic management system comprising memory comprising programmed instructions stored thereon and one or more processors configured to be capable of executing the stored programmed instructions to: inspect an incoming packet to obtain packet header data; filter the obtained packet header data for the incoming packet using a filtering criteria; associate the filtered header data for the incoming packet with a stored network profile data, wherein the stored network profile data comprises proxy profiles and wherein the filtered header data comprises filtered source address data or destination address data from the packet header data which are associated with the proxy profiles; select an optimized one of a plurality DMA behavior mechanism for the incoming packet based on the associated stored network profile data by using the proxy profiles associated with the filtered header data, the proxy profiles each being correlated to one of the DMA behavior mechanisms, wherein the proxy profiles comprise a layer-4 proxy profile, a partial layer-7 proxy profile, or a full layer-7 proxy profile; and disaggregate the incoming packet based on the corresponding selected optimized one of the plurality DMA behavior mechanism.
  • 17. The network traffic management system of claim 16, wherein the stored profile data comprises a rule for selecting the optimized DMA behavior mechanism; and wherein the optimized DMA behavior mechanism comprise a scatter and gather mechanism, a contiguous ring mechanism, a dis-contiguous ring mechanism, or a header behavior mechanism.
  • 18. The network traffic management system of claim 16, wherein the one or more filtering criteria comprises virtual local area network (VLAN) data criteria, listening data criteria, pool members data criteria, or self-internet protocol (IP) criteria.
  • 19. The network traffic management system of claim 18, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions for the filter and the select to: filter using the listening data criteria, wherein the listening data criteria comprises listening for a virtual server internet protocol (IP) address; and select the contiguous ring mechanism or the header behavior mechanism for disaggregating the incoming packet, upon determining that the virtual server IP address matches a rule for selecting the contiguous ring mechanism or the header behavior mechanism.
  • 20. The network traffic management system of claim 16, wherein the one or more processors are further configured to be capable of executing the stored programmed instructions to: select the optimized DMA behavior mechanism based on a policy associated with a tenant.
Parent Case Info

This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/642,725 filed Mar. 14, 2018, which is hereby incorporated by reference in its entirety.

US Referenced Citations (216)
Number Name Date Kind
4914650 Sriram Apr 1990 A
5388237 Sodos Feb 1995 A
5477541 White et al. Dec 1995 A
5588128 Hicok et al. Dec 1996 A
5699361 Ding et al. Dec 1997 A
5742765 Wong et al. Apr 1998 A
5761534 Lundberg et al. Jun 1998 A
5797033 Ecclesine Aug 1998 A
5809334 Galdun et al. Sep 1998 A
5809560 Schneider Sep 1998 A
5828835 Isfeld et al. Oct 1998 A
5940838 Schmuck et al. Aug 1999 A
5941988 Bhagwat et al. Aug 1999 A
5948080 Baker Sep 1999 A
6026090 Benson et al. Feb 2000 A
6026443 Oskouy et al. Feb 2000 A
6055619 North Apr 2000 A
6070219 McAlpine et al. May 2000 A
6070230 Capps May 2000 A
6115802 Tock et al. Sep 2000 A
6347337 Shah et al. Feb 2002 B1
6347347 Brown et al. Feb 2002 B1
6385624 Shinkai May 2002 B1
6388989 Malhortra May 2002 B1
6529508 Li et al. Mar 2003 B1
6574220 Petty Jun 2003 B1
6647438 Connor Nov 2003 B1
6700871 Harper et al. Mar 2004 B1
6748457 Fallon et al. Jun 2004 B2
6781990 Puri et al. Aug 2004 B1
6785236 Lo et al. Aug 2004 B1
6820133 Grove et al. Nov 2004 B1
6904040 Salapura et al. Apr 2005 B2
6915404 Desai et al. Jul 2005 B1
6934776 Connor et al. Aug 2005 B2
6999457 Shinohara Feb 2006 B2
7046628 Luhmann et al. May 2006 B2
7065630 Ledebohm et al. Jun 2006 B1
7107348 Shimada et al. Sep 2006 B2
7117308 Mitten et al. Oct 2006 B1
7120753 Accapadi et al. Oct 2006 B2
7124196 Hooper Oct 2006 B2
7142540 Hendel et al. Nov 2006 B2
7164678 Connor Jan 2007 B2
7174393 Boucher et al. Feb 2007 B2
7181457 Reinauer et al. Feb 2007 B2
7203815 Haswell Apr 2007 B2
7236491 Tsao et al. Jun 2007 B2
7272150 Bly et al. Sep 2007 B2
7281030 Davis Oct 2007 B1
7324525 Fuhs et al. Jan 2008 B2
7327674 Eberle et al. Feb 2008 B2
7349405 Deforche Mar 2008 B2
7353326 Cho et al. Apr 2008 B2
7355977 Li Apr 2008 B1
7359321 Sindhu et al. Apr 2008 B1
7376772 Fallon May 2008 B2
7403542 Thompson Jul 2008 B1
7411957 Stacy et al. Aug 2008 B2
7415034 Muller et al. Aug 2008 B2
7420931 Nanda et al. Sep 2008 B2
7457313 Patrick Nov 2008 B2
7475122 Azpitarte Jan 2009 B2
7478186 Onufryk et al. Jan 2009 B1
7493398 Bush Feb 2009 B2
7496689 Sharp et al. Feb 2009 B2
7496695 Go et al. Feb 2009 B2
7500028 Yamagishi Mar 2009 B2
7512078 Swain Mar 2009 B2
7512721 Olson Mar 2009 B1
7533197 Leonard et al. May 2009 B2
7552232 Helmer, Jr. et al. Jun 2009 B2
7558910 Alverson et al. Jul 2009 B2
7571299 Loeb Aug 2009 B2
7590753 Wolde et al. Sep 2009 B2
7620046 Ronciak et al. Nov 2009 B2
7620071 Makineni et al. Nov 2009 B2
7621162 Bartky Nov 2009 B2
7647416 Chiang et al. Jan 2010 B2
7649882 Stiliadis Jan 2010 B2
7657659 Lambeth et al. Feb 2010 B1
7660916 Moskalev et al. Feb 2010 B2
7668727 Mitchell et al. Feb 2010 B2
7668851 Triplett Feb 2010 B2
7710989 Chew May 2010 B2
7729239 Aronov et al. Jun 2010 B1
7734809 Joshi et al. Jun 2010 B2
7735099 Micalizzi, Jr. Jun 2010 B1
7742412 Medina Jun 2010 B1
7784093 Deng et al. Aug 2010 B2
7813277 Okholm et al. Oct 2010 B2
7826487 Mukerji et al. Nov 2010 B1
7835391 Chateau et al. Nov 2010 B2
7840841 Huang et al. Nov 2010 B2
7877524 Annem et al. Jan 2011 B1
7916728 Mimms Mar 2011 B1
7929433 Husak et al. Apr 2011 B2
7930444 Shasha et al. Apr 2011 B2
7936772 Kashyap May 2011 B2
7975025 Szabo et al. Jul 2011 B1
7991918 Jha et al. Aug 2011 B2
7996569 Aloni et al. Aug 2011 B2
8001430 Shasha et al. Aug 2011 B2
8006016 Muller et al. Aug 2011 B2
8077620 Solomon et al. Dec 2011 B2
8099528 Millet et al. Jan 2012 B2
8103809 Michels et al. Jan 2012 B1
8112491 Michels et al. Feb 2012 B1
8112594 Giacomoni et al. Feb 2012 B2
8233380 Subramanian et al. Jul 2012 B2
8274976 Aloni Sep 2012 B2
8279865 Giacomoni et al. Oct 2012 B2
8306036 Bollay et al. Nov 2012 B1
8346993 Michels et al. Jan 2013 B2
8417842 Vu Apr 2013 B2
8447884 Baumann May 2013 B1
8447897 Xu et al. May 2013 B2
8448234 Mondaeev et al. May 2013 B2
8630173 Sundar Jan 2014 B2
8762595 Muller Jun 2014 B1
8799403 Chan et al. Aug 2014 B2
8848715 Izenberg et al. Sep 2014 B2
8880632 Michels et al. Nov 2014 B1
8880696 Michels et al. Nov 2014 B1
8984178 Michels et al. Mar 2015 B2
9032113 Conroy et al. May 2015 B2
9036822 Jain et al. May 2015 B1
9152483 Michels et al. Oct 2015 B2
9270602 Michels et al. Feb 2016 B1
9313047 Michels et al. Apr 2016 B2
10177795 Frishman Jan 2019 B1
20010038629 Shinohara Nov 2001 A1
20010042200 Lamberton et al. Nov 2001 A1
20010043614 Viswanadham Nov 2001 A1
20020133586 Shanklin Sep 2002 A1
20020143955 Shimada et al. Oct 2002 A1
20020156927 Boucher et al. Oct 2002 A1
20030067930 Salapura Apr 2003 A1
20030204636 Greenblat et al. Oct 2003 A1
20040013117 Hendel et al. Jan 2004 A1
20040032830 Bly et al. Feb 2004 A1
20040062245 Sharp et al. Apr 2004 A1
20040098538 Horn et al. May 2004 A1
20040202161 Stachura et al. Oct 2004 A1
20040249881 Jha et al. Dec 2004 A1
20040249948 Sethi et al. Dec 2004 A1
20040267897 Hill et al. Dec 2004 A1
20050007991 Ton et al. Jan 2005 A1
20050050364 Feng Mar 2005 A1
20050063307 Samuels et al. Mar 2005 A1
20050083952 Swain Apr 2005 A1
20050091390 Helmer et al. Apr 2005 A1
20050114559 Miller May 2005 A1
20050141427 Bartky Jun 2005 A1
20050144394 Komaria et al. Jun 2005 A1
20050154825 Fair Jul 2005 A1
20050175014 Patrick Aug 2005 A1
20050193164 Royer Sep 2005 A1
20050213570 Stacy et al. Sep 2005 A1
20050226234 Sano et al. Oct 2005 A1
20060007928 Sangillo Jan 2006 A1
20060015618 Freimuth et al. Jan 2006 A1
20060067349 Ronciak et al. Mar 2006 A1
20060104303 Makineni et al. May 2006 A1
20060174324 Zur et al. Aug 2006 A1
20060221832 Muller et al. Oct 2006 A1
20060221835 Sweeney Oct 2006 A1
20060221990 Muller Oct 2006 A1
20060224820 Cho et al. Oct 2006 A1
20060235996 Wolde et al. Oct 2006 A1
20060271740 Mark et al. Nov 2006 A1
20060288128 Moskalev et al. Dec 2006 A1
20070005904 Lemoal et al. Jan 2007 A1
20070073915 Go Mar 2007 A1
20070094452 Fachan Apr 2007 A1
20070106849 Moore et al. May 2007 A1
20070162619 Aloni et al. Jul 2007 A1
20080040538 Matsuzawa et al. Feb 2008 A1
20080126509 Subramanian et al. May 2008 A1
20080184248 Barua et al. Jul 2008 A1
20080201772 Mondaeev et al. Aug 2008 A1
20080219159 Chateau et al. Sep 2008 A1
20080219279 Chew Sep 2008 A1
20090003204 Okholm et al. Jan 2009 A1
20090007266 Wu et al. Jan 2009 A1
20090016217 Kashyap Jan 2009 A1
20090089619 Huang et al. Apr 2009 A1
20090106470 Sharma Apr 2009 A1
20090138420 Swift et al. May 2009 A1
20090144130 Grouf et al. Jun 2009 A1
20090154459 Husak et al. Jun 2009 A1
20090222598 Hayden Sep 2009 A1
20090248911 Conroy et al. Oct 2009 A1
20090279559 Wong et al. Nov 2009 A1
20100023595 McMillan Jan 2010 A1
20100082849 Millet et al. Apr 2010 A1
20100085875 Solomon et al. Apr 2010 A1
20100094945 Chan et al. Apr 2010 A1
20100174824 Aloni Jul 2010 A1
20110110380 Muller et al. May 2011 A1
20110134915 Srinivasan Jun 2011 A1
20110228781 Izenberg et al. Sep 2011 A1
20110258337 Wang Oct 2011 A1
20110307663 Kultursay Dec 2011 A1
20120191800 Michels et al. Jul 2012 A1
20120195192 Matthews Aug 2012 A1
20130166793 Shahar Jun 2013 A1
20130250777 Ziegler Sep 2013 A1
20140032695 Michels et al. Jan 2014 A1
20140185442 Newman et al. Jul 2014 A1
20140301207 Durand et al. Oct 2014 A1
20170235702 Horie Aug 2017 A1
20180322913 Friedman Nov 2018 A1
20190163441 Hagspiel May 2019 A1
20190392297 Lau Dec 2019 A1
20210152659 Cai May 2021 A1
Foreign Referenced Citations (5)
Number Date Country
0803819 Oct 1997 EP
1813084 Aug 2007 EP
WO 2004079930 Sep 2004 WO
WO 2006055494 May 2006 WO
WO 2009158680 Dec 2009 WO
Non-Patent Literature Citations (26)
Entry
“Cavium Networks Product Selector Guide,” Single & Multi-Core MIPS Processors, Security Processors and Accelerator Boards, Cavium Networks, pp. 1-44, Spring 2008.
“Comtech AHA Announces 3.0 Gbps GZIP Compression/Decompression Accelerator, AHA362-PCIX offers high-speed GZIP compression and decompression,” News Release, Comtech AHA Corporation, Moscow, ID, pp. 1-2, Apr. 20, 2005.
“Comtech AHA Announces GZIP Compression and Decompression IC Offers the highest speed and compression ratio performance in hardware on the market,” News Release, Comtech AHA Corporation, Moscow, ID, pp. 1-2, Jun. 26, 2007.
“Gigabit Ethernet/PCI Network Interface Card; Host/NIC Software Interface Definition,” Revision 12.4.13, P/N 020001, Alteon WebSystems, Inc., pp. 1-80, Jul. 1999.
“Memory Mapping and DMA,” Linux Device Drivers, Third Edition, Chapter 15, pp. 412-463, Jan. 21, 2005.
“NITROX™ XL Security Acceleration Modules, PCI 3V or 3V/5V-Universal Boards for SSL and IPSec,” Cavium Networks, 1 pp, 2002.
“PCI, PCI-X,” Cavium Networks, Product Sheet, 1 pp, last retrieved Oct. 24, 2008.
“Plan 9 Kernel History: Overview,” Lucent Technologies, pp. 1-16, last accessed Oct. 22, 2007.
Agrawala, Ashok K., “Operating Systems,” Operating System Concepts, pp. 1-24, 2004.
“Layer 4/7 Switching and Other Custom IP Traffic Processing Using the NEPPI API,” Bell Laboratories, pp. 1-11, 1999.
“Direct Memory Access (DMA) and Interrupt Handling,” EventHelix.com, pp. 1-4, last accessed Jan. 29, 2010.
“TCP-Transmission Control Protocol (TCP Fast Retransmit and Recovery),” EventHelix.com, pp. 1-5, Mar. 28, 2002.
Goglin et al., “Performance Analysis of Remote File System Access Over a High-Speed Local Networks,” 18th International Parallel and Distributed Processing Symposium, pp. 1-8, Apr. 26-30, 2004.
Harvey et al., “DMA Fundamentals on Various PC Platforms,” National Instruments, Application Note 011, pp. 1-18, Apr. 1991.
Kim, Hyong-youb, “TCP Offload Through Connection Handoff,” Thesis submitted to Rice University, pp. 1-132, Aug. 2006.
Kurmann et al., “A Comparison of Two Gigabit SAN/LAN Technologies: Scalable Coherent Interface Versus Myrinet,” Proceedings of SCI Europe '98 Conference, EMMMSEC'98, pp. 1-12, Sep. 28-30, 1998.
Mangino, John, “Using DMA With High Performance Peripherals to Maximize System Performance,” Texas Instruments, pp. 1-23, Jan. 2007.
Mogul, Jeffrey C., “The Case for Persistent-Connection HTTP,” ACM SIGCOMM Computer Communication Review, vol. 25, No. 4, pp. 299-313, Oct. 1995.
Moore et al., “Inferring Internet Denial-of-Service Activity,” University of California, pp. 1-14, 2000.
Rabinovich et al., “DHTTP: An Efficient and Cache-Friendly Transfer Protocol for the Web,” IEEE/ACM Transactions on Networking, vol. 12, No. 6, pp. 1007-1020, Dec. 2004.
Salchow, Jr., KJ, “Clustered Multiprocessing: Changing the Rules of the Performance Game,” F5 Networks, Inc., White Paper, pp. 1-11, 2008.
Stevens, W., “TCP Slow Start, Congestion Avoidance, Fast Retransmit, and Fast Recovery Algorithms,” Network Working Group, Request for Comments 2001, Standards Track, pp. 1-6, Jan. 1997.
Wadge, Wallace, “Achieving Gigabit Performance on Programmable Ethernet Network Interface Cards,” pp. 1-9, May 29, 2001.
Welch, Von, “A User's Guide to TCP Windows,” pp. 1-5, Jun. 19, 1996.
“Direct Memory Access,” from Wikipedia, pp. 1-6, Jan. 26, 2010.
“Nagle's Algorithm,” from Wikipedia, pp. 1-2, Oct. 9, 2009.
Provisional Applications (1)
Number Date Country
62642725 Mar 2018 US