Methods for service stitching using a packet header and devices thereof

Information

  • Patent Grant
  • Patent Number
    11,044,200
  • Date Filed
    Monday, July 8, 2019
  • Date Issued
    Tuesday, June 22, 2021
Abstract
Methods, non-transitory computer readable media, network traffic manager apparatuses, and systems that assist with service stitching using a packet header include identifying a type of service (TOS) or differentiated services code point (DSCP) value in a header field in each of a plurality of received network packets. One or more value added service chains are identified based on the identified TOS or DSCP value. The plurality of network packets are forwarded to a destination after processing each of the plurality of network packets through the identified one or more value added service chains.
Description
FIELD

This technology generally relates to methods and devices for network traffic management and, more particularly, to methods for service stitching using a packet header and devices thereof.


BACKGROUND

Network service providers provide network services, such as security, tunneling, virtual private networks, filtering, or load-balancing by way of example, to client devices. To provide these services, these network service providers typically use dedicated network traffic management devices configured to manage and provide network services using service chains. These service chains identify a set of service functions to be applied to network packet flows in order to provide a particular network service. Accordingly, every time a network service is required, a set of functions is applied on the network packets associated with a client device. Unfortunately, applying the set of functions on each network packet adds latency to the overall flow of the network packet. Further, when determining whether to apply the set of functions, prior technologies have failed to take into account a trust previously established between the client and the network traffic management device.


SUMMARY

A method for service stitching using a packet header includes identifying a type of service (TOS) or differentiated services code point (DSCP) value in a header field in each of a plurality of received network packets. One or more value added service chains are identified based on the identified TOS or DSCP value. The plurality of network packets are forwarded to a destination after processing each of the plurality of network packets through the identified one or more value added service chains.


A non-transitory computer readable medium having stored thereon instructions for service stitching using a packet header comprising machine executable code which when executed by at least one processor, causes the processor to identify a type of service (TOS) or differentiated services code point (DSCP) value in a header field in each of a plurality of received network packets. One or more value added service chains are identified based on the identified TOS or DSCP value. The plurality of network packets are forwarded to a destination after processing each of the plurality of network packets through the identified one or more value added service chains.


A network traffic management apparatus including at least one of configurable hardware logic configured to be capable of implementing or a processor coupled to a memory and configured to execute programmed instructions stored in the memory to identify a type of service (TOS) or differentiated services code point (DSCP) value in a header field in each of a plurality of received network packets. One or more value added service chains are identified based on the identified TOS or DSCP value. The plurality of network packets are forwarded to a destination after processing each of the plurality of network packets through the identified one or more value added service chains.


A network traffic management system, comprising one or more traffic management apparatuses, client devices, or server devices, the network traffic management system comprising memory comprising programmed instructions stored thereon and one or more processors configured to be capable of executing the stored programmed instructions to identify a type of service (TOS) or differentiated services code point (DSCP) value in a header field in each of a plurality of received network packets. One or more value added service chains are identified based on the identified TOS or DSCP value. The plurality of network packets are forwarded to a destination after processing each of the plurality of network packets through the identified one or more value added service chains.


This technology provides a number of advantages including providing a method, non-transitory computer readable medium, apparatus, and system that assist with service stitching using a packet header. By using the techniques illustrated below, the technology significantly reduces the latency associated with service chaining by selectively applying the service functions on the network packets.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is an example of a block diagram of a network traffic management system including a network traffic management apparatus for service stitching using a packet header;



FIG. 2 is an example of a block diagram of a network traffic management apparatus;



FIG. 3 is an exemplary flowchart of a method for service stitching using a packet header; and



FIG. 4 is an exemplary sequence diagram for service stitching using a packet header.





DETAILED DESCRIPTION

An example of a network environment 10 which incorporates a network traffic management system for service stitching using a packet header with the network traffic manager apparatus 14 is illustrated in FIGS. 1 and 2. The exemplary environment 10 includes a plurality of client computing devices 12(1)-12(n), a network traffic manager apparatus 14, and a plurality of servers 16(1)-16(n) which are coupled together by communication networks 30, although the environment can include other types and numbers of systems, devices, components, and/or elements and in other topologies and deployments. While not shown, the exemplary environment 10 may include additional network components, such as routers, switches and other devices, which are well known to those of ordinary skill in the art and thus will not be described here. This technology provides a number of advantages including providing methods, non-transitory computer readable media, and devices that manage a flow of a packet through a value added service chain without needing to repeatedly reclassify the flow.


Referring more specifically to FIGS. 1 and 2, the network traffic manager apparatus 14 of the network traffic management system is coupled to the plurality of client computing devices 12(1)-12(n) through the communication network 30, although the plurality of client computing devices 12(1)-12(n) and network traffic manager apparatus 14 may be coupled together via other topologies. Additionally, the network traffic manager apparatus 14 is coupled to the plurality of servers 16(1)-16(n) through the communication network 30, although the servers 16(1)-16(n) and the network traffic manager apparatus 14 may be coupled together via other topologies.


The network traffic manager apparatus 14 assists with service stitching using a packet header as illustrated and described by way of the examples herein, although the network traffic manager apparatus 14 may perform other types and/or numbers of functions. As illustrated in FIG. 2, the network traffic manager apparatus 14 includes a processor or central processing unit (CPU) 18, memory 20, optional configurable hardware logic 21, and a communication system 24, which are coupled together by a bus device 26, although the network traffic manager apparatus 14 may comprise other types and numbers of elements in other configurations. In this example, the bus 26 is a PCI Express bus, although other bus types and links may be used.


The processor 18 within the network traffic manager apparatus 14 may execute one or more computer-executable instructions stored in the memory 20 for the methods illustrated and described with reference to the examples herein, although the processor 18 can execute other types and numbers of instructions and perform other types and numbers of operations. The processor 18 may comprise one or more central processing units ("CPUs") or general purpose processors with one or more processing cores, such as AMD® processor(s), although other types of processor(s) could be used (e.g., Intel®).


The memory 20 within the network traffic manager apparatus 14 may comprise one or more tangible storage media, such as RAM, ROM, flash memory, CD-ROM, floppy disk, hard disk drive(s), solid state memory, DVD, or any other memory storage types or devices, including combinations thereof, which are known to those of ordinary skill in the art. The memory 20 may store one or more non-transitory computer-readable instructions of this technology as illustrated and described with reference to the examples herein that may be executed by the processor 18. The exemplary flowchart shown in FIG. 3 is representative of example steps or actions of this technology that may be embodied or expressed as one or more non-transitory computer or machine readable instructions stored in the memory 20 that may be executed by the processor 18 and/or may be implemented by configured logic in the optional configurable logic 21.


Accordingly, the memory 20 of the network traffic manager apparatus 14 can store one or more applications that can include computer executable instructions that, when executed by the network traffic manager apparatus 14, cause the network traffic manager apparatus 14 to perform actions, such as to transmit, receive, or otherwise process messages, for example, and to perform other actions described and illustrated below with reference to FIG. 3. The application(s) can be implemented as modules or components of another application. Further, the application(s) can be implemented as operating system extensions, modules, plugins, or the like. Even further, the application(s) may be operative in a cloud-based computing environment. The application(s) can be executed within virtual machine(s) or virtual server(s) that may be managed in a cloud-based computing environment. Also, the application(s), including the network traffic manager apparatus 14 itself, may be located in virtual server(s) running in a cloud-based computing environment rather than being tied to one or more specific physical network computing devices. Also, the application(s) may be running in one or more virtual machines (VMs) executing on the network traffic manager apparatus 14. Additionally, in at least one of the various embodiments, virtual machine(s) running on the network traffic manager apparatus 14 may be managed or supervised by a hypervisor.


The optional configurable hardware logic device 21 in the network traffic manager apparatus 14 may comprise specialized hardware configured to implement one or more steps of this technology as illustrated and described with reference to the examples herein. By way of example only, the optional configurable logic hardware device 21 may comprise one or more of field programmable gate arrays ("FPGAs"), field programmable logic devices ("FPLDs"), application specific integrated circuits ("ASICs"), and/or programmable logic units ("PLUs").


The communication system 24 in the network traffic manager apparatus 14 is used to operatively couple and communicate between the network traffic manager apparatus 14, the plurality of client computing devices 12(1)-12(n), and the plurality of servers 16(1)-16(n), which are all coupled together by the communication network 30, such as one or more local area networks (LANs) and/or wide area networks (WANs), although other types and numbers of communication networks or systems with other types and numbers of connections and configurations to other devices and elements may be used. By way of example only, the communication networks, such as local area networks (LANs) and wide area networks (WANs), can use TCP/IP over Ethernet and industry-standard protocols, including NFS, CIFS, SOAP, XML, LDAP, and SNMP, although other types and numbers of communication networks can be used.


Each of the plurality of client computing devices 12(1)-12(n) of the network traffic management system 10 includes a central processing unit (CPU) or processor, a memory, an input/display device interface, a configurable logic device, and an input/output system or I/O system, which are coupled together by a bus or other link. The plurality of client computing devices 12(1)-12(n), in this example, may run interface applications, such as Web browsers, that may provide an interface to make requests for and send and/or receive data to and/or from the web application servers 16(1)-16(n) via the network traffic manager apparatus 14. Additionally, the plurality of client computing devices 12(1)-12(n) can include any type of computing device that can receive, render, and facilitate user interaction, such as client computers, network computers, mobile computers, mobile phones, virtual machines (including cloud-based computers), or the like. Each of the plurality of client computing devices 12(1)-12(n) utilizes the network traffic manager apparatus 14 to conduct one or more operations with the web application servers 16(1)-16(n), such as to obtain data and/or access the applications from one of the web application servers 16(1)-16(n), by way of example only, although other numbers and/or types of systems could be utilizing these resources and other types and numbers of functions utilizing other types of protocols could be performed.


Each of the plurality of servers 16(1)-16(n) of the network traffic management system includes a central processing unit (CPU) or processor, a memory, and a communication system, which are coupled together by a bus or other link, although other numbers and/or types of network devices could be used. Generally, the plurality of servers 16(1)-16(n) process requests for providing access to one or more enterprise web applications received from the plurality of client computing devices 12(1)-12(n) or the network traffic manager apparatus 14, via the communication network 30, according to the HTTP-based application RFC protocol or the CIFS or NFS protocol in this example, but the principles discussed herein are not limited to this example and can include other application protocols. A series of applications may run on the plurality of web application servers 16(1)-16(n) that allows the transmission of applications requested by the plurality of client computing devices 12(1)-12(n) or the network traffic manager apparatus 14. The plurality of servers 16(1)-16(n) may provide data or receive data in response to requests directed toward the respective applications on the plurality of servers 16(1)-16(n) from the plurality of client computing devices 12(1)-12(n) or the network traffic manager apparatus 14. It is to be understood that the plurality of servers 16(1)-16(n) may be hardware or software or may represent a system with multiple external resource servers, which may include internal or external networks. In this example, the plurality of servers 16(1)-16(n) may be any version of Microsoft® IIS servers or Apache® servers, although other types of servers may be used.


Although the plurality of servers 16(1)-16(n) are illustrated as single servers, each of the plurality of servers 16(1)-16(n) may be distributed across one or more distinct network computing devices. Moreover, the plurality of servers 16(1)-16(n) are not limited to a particular configuration. Thus, the plurality of web application servers 16(1)-16(n) may contain a plurality of network computing devices that operate using a master/slave approach, whereby one of the network computing devices of the plurality of servers 16(1)-16(n) operates to manage and/or otherwise coordinate operations of the other network computing devices. The plurality of servers 16(1)-16(n) may operate as a plurality of network computing devices within a cluster architecture, a peer-to-peer architecture, virtual machines, or within a cloud architecture.


Thus, the technology disclosed herein is not to be construed as being limited to a single environment, and other configurations and architectures are also envisaged. For example, one or more of the plurality of servers 16(1)-16(n) depicted in FIG. 1 can operate within the network traffic manager apparatus 14 rather than as a stand-alone server communicating with the network traffic manager apparatus 14 via the communication network(s) 30. In this example, the plurality of servers 16(1)-16(n) operate within the memory 20 of the network traffic manager apparatus 14.


While the network traffic manager apparatus 14 is illustrated in this example as including a single device, the network traffic manager apparatus 14 in other examples can include a plurality of devices or blades, each with one or more processors, each processor with one or more processing cores, that implement one or more steps of this technology. In these examples, one or more of the devices can have a dedicated communication interface or memory. Alternatively, one or more of the devices can utilize the memory, communication interface, or other hardware or software components of one or more of the other communicably coupled devices. Additionally, one or more of the devices that together comprise the network traffic manager apparatus 14 in other examples can be standalone devices or integrated with one or more other devices or applications, such as one of the plurality of servers 16(1)-16(n), the network traffic manager apparatus 14, or applications coupled to the communication network(s), for example. Moreover, one or more of the devices of the network traffic manager apparatus 14 in these examples can be in a same or a different communication network 30, including one or more public, private, or cloud networks, for example.


Although an exemplary network traffic management system 10 with the plurality of client computing devices 12(1)-12(n), the network traffic manager apparatus 14, the plurality of servers 16(1)-16(n), and communication networks 30 is described and illustrated herein, other types and numbers of systems, devices, blades, components, and elements in other topologies can be used. It is to be understood that the systems of the examples described herein are for exemplary purposes, as many variations of the specific hardware and software used to implement the examples are possible, as will be appreciated by those skilled in the relevant art(s).


Further, each of the systems of the examples may be conveniently implemented using one or more general purpose computer systems, microprocessors, digital signal processors, and micro-controllers, programmed according to the teachings of the examples, as described and illustrated herein, and as will be appreciated by those of ordinary skill in the art.


One or more of the components depicted in the network traffic management system, such as the network traffic manager apparatus 14, the plurality of client computing devices 12(1)-12(n), or the plurality of servers 16(1)-16(n), for example, may be configured to operate as virtual instances on the same physical machine. In other words, one or more of the network traffic manager apparatus 14, the plurality of client computing devices 12(1)-12(n), or the plurality of servers 16(1)-16(n) illustrated in FIG. 1 may operate on the same physical device rather than as separate devices communicating through a network as depicted in FIG. 1. There may be more or fewer client computing devices 12(1)-12(n), network traffic manager apparatuses 14, or servers 16(1)-16(n) than depicted in FIG. 1. The plurality of client computing devices 12(1)-12(n) or the plurality of servers 16(1)-16(n) could also be implemented as applications on the network traffic manager apparatus 14.


In addition, two or more computing systems or devices can be substituted for any one of the systems or devices in any example. Accordingly, principles and advantages of distributed processing, such as redundancy and replication, also can be implemented, as desired, to increase the robustness and performance of the devices and systems of the examples. The examples may also be implemented on computer system(s) that extend across any suitable network using any suitable interface mechanisms and traffic technologies, including by way of example only teletraffic in any suitable form (e.g., voice and modem), wireless traffic media, wireless traffic networks, cellular traffic networks, 3G traffic networks, Public Switched Telephone Networks (PSTNs), Packet Data Networks (PDNs), the Internet, intranets, and combinations thereof.


The examples also may be embodied as a non-transitory computer readable medium having instructions stored thereon for one or more aspects of the technology as described and illustrated by way of the examples herein, which when executed by a processor (or configurable hardware), cause the processor to carry out the steps necessary to implement the methods of the examples, as described and illustrated herein.


An example of a method for service stitching using a packet header will now be described with reference to FIGS. 1-4. First, in step 305, the network traffic manager apparatus 14 receives a plurality of network packets from one of the plurality of client devices 12(1)-12(n) directed to one of the plurality of servers 16(1)-16(n), although the network traffic manager apparatus 14 can receive other types or amounts of information and from other sources. In this example, the network traffic manager apparatus 14 receives the plurality of network packets through a virtual local area network (VLAN), although the network traffic manager apparatus 14 can receive the plurality of network packets through other types of network connections.
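By way of illustration only, the sketch below shows one way a receive path might recognize the VLAN a packet arrived on. It is not taken from the patent; it assumes raw Ethernet frames carrying at most a single IEEE 802.1Q tag, and the function name is hypothetical.

```python
import struct

ETHERTYPE_8021Q = 0x8100  # TPID marking a single 802.1Q VLAN tag

def parse_vlan(frame: bytes):
    """Return (vlan_id, inner_ethertype, payload) for a tagged Ethernet frame,
    or (None, ethertype, payload) if the frame is untagged."""
    ethertype = struct.unpack("!H", frame[12:14])[0]
    if ethertype == ETHERTYPE_8021Q:
        tci, inner_type = struct.unpack("!HH", frame[14:18])
        return tci & 0x0FFF, inner_type, frame[18:]   # low 12 bits of TCI = VLAN ID
    return None, ethertype, frame[14:]

# Example: a frame tagged with VLAN 100 carrying an IPv4 payload (0x0800).
frame = bytes(12) + struct.pack("!HHH", ETHERTYPE_8021Q, 100, 0x0800) + bytes(20)
print(parse_vlan(frame)[0])  # -> 100
```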


In step 310, the network traffic manager apparatus 14 identifies a type of application and data associated with the requesting one of the plurality of client computing devices 12(1)-12(n). In this example, the type of application and the data associated with the requesting one of the plurality of client computing devices 12(1)-12(n) can be identified through the header and payload data of the received plurality of packets, although the network traffic manager apparatus 14 can identify the type of application and the data associated with the requesting one of the plurality of client computing devices 12(1)-12(n) using other types of techniques.
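A minimal sketch of the kind of classification step 310 describes, under the assumption that the packets are IPv4 over TCP and that the destination port is a sufficient hint for the application type; the port-to-application table is invented for this example.

```python
import struct

# Hypothetical port-to-application mapping used only for this sketch.
PORT_TO_APP = {80: "http", 443: "https", 53: "dns"}

def classify_application(ip_packet: bytes) -> str:
    """Guess the requesting application from the TCP destination port of an IPv4 packet."""
    ihl = (ip_packet[0] & 0x0F) * 4                          # IPv4 header length in bytes
    dst_port = struct.unpack("!H", ip_packet[ihl + 2:ihl + 4])[0]
    return PORT_TO_APP.get(dst_port, "unknown")
```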


Next, in step 315, the network traffic manager apparatus 14 determines if each of the received plurality of network packets is related to a flow of network packets associated with a VLAN previously established between the requesting one of the plurality of client computing devices 12(1)-12(n) and the network traffic manager apparatus 14, based on the identified type of application and the client device data. Alternatively, the network traffic manager apparatus 14 can determine if the received plurality of network packets relate to a previously established VLAN using other types of parameters and techniques. Accordingly, when the network traffic manager apparatus 14 determines that the received plurality of network packets are associated with the previously established VLAN, then the Yes branch is taken to step 320.
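Step 315 leaves the exact matching criteria open; one hedged possibility is a simple in-memory flow table keyed by the arrival VLAN and the packet 5-tuple, as sketched below. The key layout is an assumption made for illustration, not the patent's prescription.

```python
# Flows already classified on a previously established VLAN, keyed by
# (vlan_id, src_ip, dst_ip, src_port, dst_port, protocol). Illustrative only.
established_flows = set()

def is_known_flow(vlan_id, src_ip, dst_ip, src_port, dst_port, proto) -> bool:
    """Return True when the packet belongs to a flow seen on an established VLAN."""
    return (vlan_id, src_ip, dst_ip, src_port, dst_port, proto) in established_flows

def remember_flow(vlan_id, src_ip, dst_ip, src_port, dst_port, proto) -> None:
    """Record a newly classified flow so later packets skip re-classification."""
    established_flows.add((vlan_id, src_ip, dst_ip, src_port, dst_port, proto))
```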


In step 320, the network traffic manager apparatus 14 identifies the type of service (TOS) or the differentiated service code point (DSCP) value in the header of the received plurality of network packets. In this example, the TOS or DSCP value is a value that was previously encoded by the network traffic manager apparatus 14 on the previously established VLAN. Further, in this example the TOS or DSCP value corresponds to a value added service chain that the received plurality of packets belongs to. Although in this example the network traffic manager apparatus 14 uses the TOS or DSCP value, the network traffic manager apparatus 14 can use other techniques to determine the value added service chain of each of the plurality of network packets.
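In IPv4 the legacy TOS octet is the second byte of the header, and the DSCP occupies its upper six bits (per RFC 2474); a minimal reader for step 320 could therefore look like the following sketch.

```python
def read_tos(ip_packet: bytes) -> int:
    """Return the full 8-bit (legacy) TOS octet of an IPv4 header."""
    return ip_packet[1]

def read_dscp(ip_packet: bytes) -> int:
    """Return the 6-bit DSCP value, i.e. the upper six bits of the TOS octet."""
    return (ip_packet[1] >> 2) & 0x3F
```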


Next, in step 325, the network traffic manager apparatus 14 processes each of the received plurality of network packets through a sequence of value added service chains based on the identified TOS/DSCP value. In this example, the TOS/DSCP value in the header of each of the network packets remains unchanged as it is processed by each of the value added services, and by using this technique the network traffic manager apparatus 14 does not have to compute a new TOS/DSCP value for a new value added service chain. Additionally, in this example, the network traffic manager apparatus 14 can use the previously used VLAN and the last completed value added service, along with the TOS/DSCP value, to identify and process the received plurality of packets through the value added service chain.
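A sketch of the step 325 dispatch, under the assumption that the chain lookup is an in-memory table mapping a DSCP value to an ordered list of service callables; the service functions and DSCP values below are placeholders, not part of the patent.

```python
# Placeholder value added services; each takes and returns the packet bytes.
def firewall(pkt: bytes) -> bytes: return pkt
def video_optimizer(pkt: bytes) -> bytes: return pkt
def parental_control(pkt: bytes) -> bytes: return pkt

# DSCP value -> ordered value added service chain (illustrative mapping only).
SERVICE_CHAINS = {
    10: [firewall, parental_control],
    12: [firewall, video_optimizer],
}

def process_through_chain(ip_packet: bytes, dscp: int) -> bytes:
    """Run the packet through every service selected by its DSCP value.
    The DSCP field itself is never rewritten, so no re-marking is required."""
    for service in SERVICE_CHAINS.get(dscp, []):
        ip_packet = service(ip_packet)
    return ip_packet
```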


Next in step 330, the network traffic manager apparatus 14 forwards each of the received plurality of network packets that has been processed through the value added service chain to the corresponding one of the plurality of servers 16(1)-16(n).


If back in step 315 the network traffic manager apparatus 14 determines that the received plurality of network packets are not related to a previous VLAN connection, then the No branch is taken to step 335.


In step 335, the network traffic manager apparatus 14 obtains a policy and the subscription data associated with the requesting one of the plurality of client devices 12(1)-12(n) and the type of application being accessed. In this example, the memory 20 of the network traffic manager apparatus 14 includes a table with the data associated with the requesting one of the plurality of client computing devices 12(1)-12(n), the type of application being accessed, and the corresponding subscription and policies. Additionally, in this example, the obtained subscription and policy data includes data associated with the value added service chain through which the received plurality of network packets has to be processed, although the subscription and policy data can include other types and/or amounts of information.
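The table lookup of step 335 might resemble the sketch below; the key shape, policy names, and chain identifiers are invented purely to make the example concrete.

```python
# (client_id, application) -> subscription and policy record. Illustrative data only.
POLICY_TABLE = {
    ("client-1", "https"): {"policy": "gold",  "chain": ["firewall", "video_optimizer"]},
    ("client-2", "http"):  {"policy": "basic", "chain": ["firewall"]},
}

def lookup_policy(client_id: str, application: str) -> dict:
    """Return the subscription/policy record for a client and application pair."""
    return POLICY_TABLE.get((client_id, application), {"policy": "default", "chain": []})
```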


In step 340, the network traffic manager apparatus 14 computes a unique value for each of the received plurality of packets based on the obtained policy and the subscription data, although the network traffic manager apparatus 14 can compute the unique value for each of the received plurality of packets using different techniques.
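The patent does not fix how the unique value of step 340 is computed; one hedged possibility, shown only as a sketch, is to hash the policy and subscription data down to the 6-bit range that a DSCP field can carry.

```python
import hashlib

def compute_chain_value(policy: str, subscription: str) -> int:
    """Derive a stable 6-bit value (0-63) from policy and subscription data.
    This is one possible scheme, not the patent's prescribed computation."""
    digest = hashlib.sha256(f"{policy}:{subscription}".encode()).digest()
    return digest[0] & 0x3F
```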


In step 345, the network traffic manager apparatus 14 encodes the computed unique value in the TOS/DSCP header field for each of the received plurality of network packets and the exemplary flow proceeds to step 325 as described above.
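Encoding the computed value in step 345 amounts to rewriting the DSCP bits of the IPv4 header and refreshing the header checksum. The sketch below assumes IPv4 and a mutable packet buffer.

```python
import struct

def ipv4_header_checksum(header: bytes) -> int:
    """Standard one's-complement sum over the 16-bit words of an IPv4 header."""
    total = sum(struct.unpack("!%dH" % (len(header) // 2), header))
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def encode_dscp(ip_packet: bytearray, value: int) -> bytearray:
    """Write a 6-bit value into the DSCP field and fix up the header checksum."""
    ip_packet[1] = ((value & 0x3F) << 2) | (ip_packet[1] & 0x03)   # keep the ECN bits
    ihl = (ip_packet[0] & 0x0F) * 4
    ip_packet[10:12] = b"\x00\x00"                                 # zero old checksum
    ip_packet[10:12] = struct.pack("!H", ipv4_header_checksum(bytes(ip_packet[:ihl])))
    return ip_packet
```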


In another example, the network traffic manager apparatus 14 can translate the source port to a network address translation (NAT) port and encode the unique value associated with the value added service chain to the source port field of the header. In other words, the network traffic manager apparatus 14 can identify the value added service chain associated with the received plurality of network packets based on the value encoded in the source port field of the header and can then accordingly process the packets through the value added service chain.
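The source-port alternative described above can be sketched as follows, assuming the chain identifier fits in the low six bits of the translated TCP source port; the NAT port layout is an assumption, and the TCP checksum update a real device would also perform is omitted for brevity.

```python
import struct

def encode_chain_in_source_port(ip_packet: bytearray, chain_id: int,
                                nat_port_base: int = 40000) -> bytearray:
    """Translate the TCP source port to a NAT port whose low 6 bits carry the
    value added service chain identifier (illustrative layout only)."""
    ihl = (ip_packet[0] & 0x0F) * 4                       # start of the TCP header
    nat_port = nat_port_base | (chain_id & 0x3F)          # 40000 has clear low 6 bits
    ip_packet[ihl:ihl + 2] = struct.pack("!H", nat_port)
    return ip_packet

def decode_chain_from_source_port(ip_packet: bytes) -> int:
    """Recover the chain identifier encoded in the TCP source port."""
    ihl = (ip_packet[0] & 0x0F) * 4
    return struct.unpack("!H", ip_packet[ihl:ihl + 2])[0] & 0x3F
```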


Having thus described the basic concept of the technology, it will be rather apparent to those skilled in the art that the foregoing detailed disclosure is intended to be presented by way of example only, and is not limiting. Various alterations, improvements, and modifications will occur and are intended to those skilled in the art, though not expressly stated herein. These alterations, improvements, and modifications are intended to be suggested hereby, and are within the spirit and scope of the technology. Additionally, the recited order of processing elements or sequences, or the use of numbers, letters, or other designations therefore, is not intended to limit the claimed processes to any order except as may be specified in the claims. Accordingly, the technology is limited only by the following claims and equivalents thereto.

Claims
  • 1. A method for service stitching using a packet header, the method implemented by a network traffic management system comprising one or more network traffic apparatuses, client devices, or server devices, the method comprising: identifying a type of service (TOS) or differentiated services code point (DSCP) value in a header field in each of a plurality of received network packets;identifying a value added service chain based on the identified TOS or DSCP value, the identifying further comprising computing a unique value indicative of the identified value added service chain for each of the received plurality of network packets based on obtained policy data and subscription data and encoding the computed unique value into the identified TOS or DSCP header field for each of the received plurality of network packets; andforwarding the plurality of network packets to a destination after processing each of the plurality of network packets through the identified value added service chain based on the encoded unique value.
  • 2. The method as set forth in claim 1 further comprising, obtaining requested application data and client data from each of the received plurality of network packets.
  • 3. The method as set forth in claim 1 further comprising, determining when each of the received plurality of network packets are associated with an existing network connection.
  • 4. The method as set forth in claim 2 further comprising, obtaining the policy data and the subscription data associated with the application data and the client data when any of the received plurality of network packets are unassociated with any existing network connection.
  • 5. A non-transitory computer readable medium having stored thereon instructions for service stitching using a packet header comprising executable code which when executed by one or more processors, causes the processors to: identify a type of service (TOS) or differentiated services code point (DSCP) value in a header field in each of a plurality of received network packets;identify a value added service chain based on the identified TOS or DSCP value, the identifying further comprising computing a unique value indicative of the identified value added service chain for each of the received plurality of network packets based on obtained policy data and subscription data and encoding the computed unique value into the identified TOS or DSCP header field for each of the received plurality of network packets; andforward the plurality of network packets to a destination after processing each of the plurality of network packets through the identified value added service chain based on the encoded unique value.
  • 6. The medium as set forth in claim 5 further comprising, obtain requested application data and client data from each of the received plurality of network packets.
  • 7. The medium as set forth in claim 6 further comprising, determine when each of the received plurality of network packets are associated with an existing network connection.
  • 8. The medium as set forth in claim 6 further comprising, obtain the policy data and the subscription data associated with the application data and the client data when any of the received plurality of network packets are unassociated with any existing network connection.
  • 9. A network traffic manager apparatus, comprising memory comprising programmed instructions stored in the memory and one or more processors configured to be capable of executing the programmed instructions stored in the memory to: identify a type of service (TOS) or differentiated services code point (DSCP) value in a header field in each of a plurality of received network packets;identify a value added service chain based on the identified TOS or DSCP value, the identifying further comprising computing a unique value indicative of the identified value added service chain for each of the received plurality of network packets based on obtained policy data and subscription data and encoding the computed unique value into the identified TOS or DSCP header field for each of the received plurality of network packets; andforward the plurality of network packets to a destination after processing each of the plurality of network packets through the identified value added service chain based on the encoded unique value.
  • 10. The apparatus as set forth in claim 9 wherein the one or more processors are further configured to be capable of executing the programmed instructions stored in the memory to obtain requested application data and client data from each of the received plurality of network packets.
  • 11. The apparatus as set forth in claim 9 wherein the one or more processors are further configured to be capable of executing the programmed instructions stored in the memory to determine when each of the received plurality of network packets are associated with an existing network connection.
  • 12. The apparatus as set forth in claim 10 wherein the one or more processors are further configured to be capable of executing the programmed instructions stored in the memory to obtain the policy data and the subscription data associated with the application data and the client data when any of the received plurality of network packets are unassociated with any existing network connection.
  • 13. A network traffic management system, comprising one or more traffic management apparatuses, client devices, or server devices, the network traffic management system comprising memory comprising programmed instructions stored thereon and one or more processors configured to be capable of executing the stored programmed instructions to: identify a type of service (TOS) or differentiated services code point (DSCP) value in a header field in each of a plurality of received network packets;identify a value added service chain based on the identified TOS or DSCP value, the identifying further comprising computing a unique value indicative of the identified value added service chain for each of the received plurality of network packets based on obtained policy data and subscription data, and encoding the computed unique value into the identified TOS or DSCP header field for each of the received plurality of network packets; andforward the plurality of network packets to a destination after processing each of the plurality of network packets through the identified value added service chain based on the encoded unique value.
  • 14. The network traffic management system of claim 13, wherein the one or more processors are further configured to be capable of executing the programmed instructions stored in the memory to obtain requested application data and client data from each of the received plurality of network packets.
  • 15. The network traffic management system of claim 13, wherein the one or more processors are further configured to be capable of executing the programmed instructions stored in the memory to determine when each of the received plurality of network packets are associated with an existing network connection.
  • 16. The network traffic management system of claim 14, wherein the one or more processors are further configured to be capable of executing the programmed instructions stored in the memory to obtain the policy data and the subscription data associated with the application data and the client data when any of the received plurality of network packets are unassociated with any existing network connection.
  • 17. The method as set forth in claim 1 further comprising: encoding the computed unique value indicative of the identified value added service chain into a source port field of each of the received plurality of network packets; andprocessing each of the received plurality of network packets through the identified value added service chain based on the encoded unique value in the source port field.
  • 18. The medium as set forth in claim 5 further comprising: encoding the computed unique value indicative of the identified value added service chain into a source port field of each of the received plurality of network packets; andprocessing each of the received plurality of network packets through the identified value added service chain based on the encoded unique value in the source port field.
  • 19. The apparatus as set forth in claim 9 wherein the one or more processors are further configured to be capable of executing the programmed instructions stored in the memory to: encode the computed unique value indicative of the identified value added service chain into a source port field of each of the received plurality of network packets; andprocess each of the received plurality of network packets through the identified value added service chain based on the encoded unique value in the source port field.
  • 20. The network traffic management system of claim 13, wherein the one or more processors are further configured to be capable of executing the programmed instructions stored in the memory to: encode the computed unique value indicative of the identified value added service chain into a source port field of each of the received plurality of network packets; andprocess each of the received plurality of network packets through the identified value added service chain based on the encoded unique value in the source port field.
Parent Case Info

This application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/694,636, filed Jul. 6, 2018, which is hereby incorporated by reference in its entirety.

Provisional Applications (1)
Number Date Country
62694636 Jul 2018 US