Offline, intelligent load balancing of SCTP traffic

Information

  • Patent Grant
  • Patent Number
    12,267,241
  • Date Filed
    Tuesday, May 4, 2021
  • Date Issued
    Tuesday, April 1, 2025
Abstract
Techniques for enabling offline, intelligent load balancing of Stream Control Transmission Protocol (SCTP) traffic are provided. According to one embodiment, a load balancer can receive one or more SCTP packets that have been replicated from a network being monitored. The load balancer can further recover an SCTP message from the one or more SCTP packets and can map the SCTP message to an egress port based on one or more parameters decoded from the SCTP message and one or more rules. The load balancer can then transmit the SCTP message out of the egress port towards an analytic probe or tool for analysis.
Description
CROSS-REFERENCES TO RELATED APPLICATIONS

The present application claims the benefit and priority of India Provisional Application No. 201641010295, filed Mar. 24, 2016, entitled “SYSTEM AND METHOD FOR OFFLINE LOAD BALANCING OF SCTP PROTOCOL TRAFFIC” and U.S. Pat. No. 10,999,200, filed Oct. 27, 2016, entitled “OFFLINE, INTELLIGENT LOAD BALANCING OF SCTP TRAFFIC.” The entire contents of these applications are incorporated herein by reference for all purposes.


BACKGROUND

As known in the field of computer networking, a visibility network (sometimes referred to as a visibility fabric) is a type of network that facilitates the monitoring and analysis of traffic flowing through another network (referred to herein as a “core” network). The purposes of deploying a visibility network are varied and can include management/optimization of the core network, security monitoring of the core network, business intelligence/reporting, compliance validation, and so on.



FIG. 1 depicts an example visibility network 100 according to an embodiment. As shown, visibility network 100 includes a number of taps 102 that are deployed within a core network 104. Taps 102 are configured to replicate traffic that is exchanged between network elements in core network 104 and forward the replicated traffic to a packet broker 106 (note that, in addition to or in lieu of taps 102, one or more routers or switches in core network 104 can be tasked to replicate and forward traffic to packet broker 106 using their respective SPAN or mirror functions). Packet broker 106 can perform various packet processing functions on the traffic received from taps 102, such as removing protocol headers, filtering/classifying/correlating packets based on configured rules, and so on. Packet broker 106 can then transmit the processed traffic to one or more analytic probes/tools 108, which can carry out various types of calculations and analyses on the traffic in accordance with the business goals/purposes of visibility network 100 (e.g., calculation of key performance indicators (KPIs), detection of security threats/attacks in core network 104, generation of reports, etc.).


In cases where a single probe/tool 108 does not have sufficient capacity (e.g., compute capacity, memory capacity, storage capacity, etc.) to analyze the entirety of the traffic volume replicated from core network 104, packet broker 106 can implement functionality to distribute the replicated traffic across a number of probes/tools in a load balanced manner. In this way, each individual probe/tool 108 can be tasked to handle a subset (rather than the entirety) of the replicated traffic. Existing packet brokers typically implement this load balancing functionality by calculating a hash value for each replicated packet based on a 5-tuple of packet header fields comprising <source IP address, source port, destination IP address, destination port, protocol identifier> and then forwarding the packet to the probe/tool associated with the calculated hash value.
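The conventional 5-tuple scheme described above can be sketched in a few lines; the function name and key format below are illustrative, not taken from any particular packet broker:

```python
import hashlib

def five_tuple_tool_index(src_ip: str, src_port: int, dst_ip: str,
                          dst_port: int, proto: int, num_tools: int) -> int:
    """Hash a packet's 5-tuple and map it onto one of num_tools probes/tools."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}/{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:4], "big") % num_tools
```

Every packet carrying the same 5-tuple hashes to the same index, which is precisely why a change in the source or destination IP address (as happens on an SCTP multi-homing failover) moves the session to a different tool.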


Unfortunately, while load balancing based on the foregoing 5-tuple works well for transport protocols such as TCP or UDP where traffic is always transmitted along a single path between endpoints (i.e., between a single source IP and single destination IP), it is less suitable for transport protocols such as SCTP (Stream Control Transmission Protocol) where traffic can be transmitted along one of multiple paths between endpoints (known as multi-homing). This is because multi-homing protocols support automatic failover of traffic from one path to another in response to a failure, which in the case of 5-tuple based load balancing will cause the packets for a given communication session to be hashed, and thus forwarded, to a different probe/tool after the failover than before the failover. This switch in the destination probe/tool is undesirable since all of the traffic for a single communication session (e.g., in the case of a mobile network, a single mobile user session) should ideally go to the same probe/tool in order to facilitate state-based analyses.


Further, even in non-multi-homing deployments, hashing based on the 5-tuple of <source IP address, source port, destination IP address, destination port, protocol identifier> necessarily causes a given probe/tool to receive all of the traffic between the two endpoints identified in the tuple. If the volume of traffic between those two endpoints is particularly large, the probe/tool may become overloaded. Accordingly, it would be desirable to have a mechanism for performing load balancing within a visibility network that is more intelligent than simple 5-tuple hashing.


SUMMARY

Techniques for enabling offline, intelligent load balancing of Stream Control Transmission Protocol (SCTP) traffic are provided. According to one embodiment, a load balancer can receive one or more SCTP packets that have been replicated from a network being monitored. The load balancer can further recover an SCTP message from the one or more SCTP packets and can map the SCTP message to an egress port based on one or more parameters decoded from the SCTP message and one or more rules. The load balancer can then transmit the SCTP message out of the egress port towards an analytic probe or tool for analysis.


The following detailed description and accompanying drawings provide a better understanding of the nature and advantages of particular embodiments.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 depicts an example visibility network.



FIG. 2 depicts a visibility network comprising an SCTP load balancer according to an embodiment.



FIG. 3 depicts a diagram of two network elements that are connected via SCTP using multi-homing according to an embodiment.



FIG. 4 depicts the structure of an SCTP packet according to an embodiment.



FIGS. 5 and 6 depict a flowchart and a packet flow for performing packet modifying SCTP load balancing according to an embodiment.



FIGS. 7 and 8 depict a flowchart and a packet flow for performing packet preserving SCTP load balancing according to an embodiment.



FIG. 9 depicts an example network switch/router according to an embodiment.



FIG. 10 depicts an example computer system according to an embodiment.





DETAILED DESCRIPTION

In the following description, for purposes of explanation, numerous examples and details are set forth in order to provide an understanding of various embodiments. It will be evident, however, to one skilled in the art that certain embodiments can be practiced without some of these details, or can be practiced with modifications or equivalents thereof.


1. Overview

Embodiments of the present disclosure provide techniques for performing offline, intelligent load balancing of traffic that is transmitted via a multi-homing transport protocol such as SCTP. The qualifier “offline” means that these load balancing techniques can be performed on replicated (rather than live) traffic, such as via a packet broker in a visibility network. The qualifier “intelligent” means that these techniques can perform load balancing in a more configurable and granular fashion than 5-tuple hashing (for example, at the granularity of SCTP messages), and thus can overcome the problems that arise when using 5-tuple based load balancing in, e.g., multi-homing deployments.


These and other aspects of the present disclosure are described in further detail in the sections that follow.


2. Visibility Network


FIG. 2 depicts an example visibility network 200 that may be used to implement the intelligent load balancing techniques of the present disclosure according to an embodiment. As shown, visibility network 200 includes a number of taps 202 that are deployed in a core network 204 and are configured to replicate traffic exchanged in network 204 to a packet broker 206. In FIG. 2, core network 204 is a mobile LTE network that comprises network elements specific to this type of network, such as an eNodeB 210, a mobility management entity (MME) 212, a serving gateway (SGW) 214, and a packet data network gateway (PGW) 216 which connects to an external packet data network such as the Internet. Further, in this particular example, taps 202 are configured to replicate and forward SCTP traffic that is exchanged on certain interfaces (e.g., S1-MME, SGs, S6a, Gx, and Gy) of core network 204. However, it should be appreciated that core network 204 can be any other type of computer network known in the art, such as a mobile 3G network, a landline local area network (LAN) or wide area network (WAN), etc.


Upon receiving the replicated traffic via taps 202, packet broker 206 can perform various types of packet processing functions on the traffic (as configured/assigned by an operator of visibility network 200) and can forward the processed traffic to one or more analytic probes/tools 208 for analysis. In one embodiment, packet broker 206 can be implemented solely in hardware, such as in the form of a network switch or router that relies on ASIC or FPGA-based packet processors to execute its assigned packet processing functions based on rules that are programmed into hardware memory tables (e.g., CAM tables) resident on the packet processors and/or line cards of the device. In another embodiment, packet broker 206 can be implemented solely in software that runs on, e.g., one or more general purpose physical or virtual computer systems. In yet another embodiment, packet broker 206 can be implemented using a combination of hardware and software, such as a combination of a hardware-based basic packet broker and a software-based “session director” cluster as described in co-owned U.S. patent application Ser. No. 13/205,889, entitled “Software-based Packet Broker,” the entire contents of which are incorporated herein by reference for all purposes.


As noted in the Background section, in cases where the replicated traffic from core network 204 needs to be load balanced across multiple probes/tools 208, conventional packet brokers typically calculate a hash value for each replicated packet based on a 5-tuple of packet header fields comprising <source IP address, source port, destination IP address, destination port, protocol identifier> and forward the packet to a probe/tool mapped to the calculated hash value. However, this approach is problematic for traffic that is transported over a multi-homing transport protocol such as SCTP, since the packets for a given communication session may be inadvertently re-routed to a different probe/tool after a network failure (due to the session traffic being failed over to an alternate path between the session endpoints). Further, since 5-tuple based load balancing sends all of the traffic between two endpoint IP addresses to the same designated probe/tool, if the volume of traffic between those IP addresses becomes abnormally high, the designated probe/tool can become overloaded.


To address these and other similar issues, packet broker 206 of FIG. 2 implements a novel SCTP load balancer 218. Depending on the configuration of packet broker 206, SCTP load balancer 218 can be implemented in software, hardware, or a combination thereof. Generally speaking, SCTP load balancer 218 can receive the SCTP traffic that is replicated from core network 204 (after it has been processed via the assigned functions of packet broker 206) and can distribute that traffic across probes/tools 208 in a manner that (1) is more granular/configurable than 5-tuple hashing, and (2) ensures all of the traffic for a single communication session is sent to the same probe/tool, even if SCTP causes an automatic failover from one multi-homing path to another. In these ways, SCTP load balancer 218 can eliminate or minimize the problems associated with simple 5-tuple based load balancing.


For example, in one set of embodiments, SCTP load balancer 218 can recover SCTP messages that are embedded in the SCTP packets replicated from core network 204 and can map the individual messages to particular egress ports (and thus, probes/tools) in a load balanced manner based on user-defined rules/criteria. SCTP load balancer 218 can then transmit the SCTP messages out of the mapped egress ports in the form of newly constructed SCTP packets. This approach is referred to herein as the “packet modifying” approach and is detailed in section (4) below.


In an alternative set of embodiments, SCTP load balancer 218 can recover SCTP messages that are embedded in the replicated SCTP packets and can map the messages to particular egress ports in a load balanced manner based on user-defined rules/criteria as noted above; however, instead of transmitting the SCTP messages in the form of newly constructed SCTP packets, SCTP load balancer 218 can transmit the messages by forwarding intact copies of the original SCTP packets (i.e., the packets received at packet broker 206 via, e.g., taps 202). In a situation where an original SCTP packet includes two messages (or portions thereof) that are mapped to two different egress ports respectively, SCTP load balancer 218 can forward a copy of that packet out of each of the two egress ports. This approach is referred to herein as the “packet preserving” approach and is detailed in section (5) below.


It should be appreciated that FIG. 2 is illustrative and not intended to limit embodiments of the present disclosure. For example, the various entities shown in FIG. 2 may be arranged according to different configurations and/or include subcomponents or functions that are not specifically described. One of ordinary skill in the art will recognize other variations, modifications, and alternatives.


3. Stream Control Transmission Protocol (SCTP)

To provide further context for the load balancing techniques described herein, the following sub-sections present a brief discussion of SCTP and its properties.


3.1 Protocol Overview and Multi-Homing


SCTP is a transport layer (i.e., OSI Layer 4) protocol that is commonly used in mobile networks such as LTE network 204 shown in FIG. 2 for signaling messages between the network's various elements (e.g., eNodeB, MME, MSC, etc.). As mentioned previously, SCTP supports multi-homing, which allows multiple, redundant paths to be defined between the endpoints of an SCTP connection (known as an SCTP “association”). In this way, SCTP can provide a level of resiliency and reliability that is not possible using alternative transport protocols such as TCP or UDP.


For example, FIG. 3 depicts an MME server and an HSS server that are connected via an SCTP association comprising two paths (a first through network #1 and a second through network #2). The first path is defined between a NIC2 on the MME server having an IP address A.B.C.h and a NIC1 on the HSS server having an IP address A.B.C.j. Further, the second path is defined between a NIC1 on the MME server having an IP address X.Y.Z.n and a NIC2 on the HSS server having an IP address X.Y.Z.m. In a situation where a failure (e.g., port or link failure) occurs on either path, the protocol can detect the failure and automatically redirect traffic to the other path, thereby ensuring that traffic continues to flow between the servers.


3.2 Message-Oriented Multi-Streaming


SCTP is a message-oriented protocol, which means that it transmits a sequence of messages (rather than an unbroken sequence of bytes) between endpoints. Each message is composed of a sequence of smaller units known as chunks.


SCTP is also a multi-streaming protocol, which means that it can transmit, within a single SCTP association, several independent streams of messages/chunks in parallel. Error handling is implemented on a per-stream basis and thus a packet drop, CRC error, or checksum error on one stream will not affect the transfer of other streams, which eliminates unnecessary head-of-line blocking. In LTE networks, SCTP streams are commonly used to group together messages belonging to a range of mobile users (identified by, e.g., International Mobile Subscriber Identity (IMSI)), such as one stream for every X users.


To clarify the message-oriented multi-streaming nature of SCTP, FIG. 4 depicts the structure of a typical SCTP packet 400. As shown, SCTP packet 400 includes an SCTP header and a number of chunks 1-N. Each chunk includes a stream ID which uniquely identifies the stream to which the chunk belongs, a transmission sequence number (TSN) which uniquely identifies the ordering of the chunk relative to other chunks in the same and other SCTP packets transmitted via this association, and a stream sequence number (SSN) which identifies the message within the stream to which the chunk belongs. With this information, a receiver endpoint of an SCTP association can reconstruct the streams and constituent messages sent by the sender endpoint of the association.
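As a concrete illustration, the DATA chunk fields just described can be pulled out of raw bytes in a few lines; the layout follows the standard SCTP DATA chunk format (RFC 4960, chunk type 0), and the helper below is a sketch rather than a complete SCTP parser:

```python
import struct
from typing import NamedTuple

class DataChunk(NamedTuple):
    tsn: int        # transmission sequence number (association-wide ordering)
    stream_id: int  # stream the chunk belongs to
    ssn: int        # stream sequence number (message within the stream)
    begin: bool     # "B" bit: chunk is the first fragment of a message
    end: bool       # "E" bit: chunk is the last fragment of a message
    payload: bytes

def parse_data_chunk(buf: bytes) -> DataChunk:
    """Parse one SCTP DATA chunk (RFC 4960 layout) from a byte buffer."""
    ctype, flags, length = struct.unpack_from("!BBH", buf, 0)
    assert ctype == 0, "chunk type 0 = DATA"
    tsn, sid, ssn, _ppid = struct.unpack_from("!IHHI", buf, 4)
    return DataChunk(tsn, sid, ssn,
                     begin=bool(flags & 0x02), end=bool(flags & 0x01),
                     payload=buf[16:length])
```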


It should be noted that SCTP supports two types of chunks: data chunks and control chunks. Data chunks carry a message payload while control chunks are used for creating/tearing down an SCTP association, transmitting acknowledgements between endpoints, and testing reachability. To preserve message boundaries, each chunk includes a “B” (begin) bit and an “E” (end) bit; these bits indicate whether the chunk is the first chunk of a message or the last chunk of a message respectively. If both bits are set, the chunk contains the entirety of a single message.


It should also be noted that an SCTP packet may contain chunks (and thus messages) belonging to different streams. There is no requirement that a given SCTP packet comprise data solely for a single stream.


4. Packet Modifying SCTP Load Balancing

With the foregoing discussion of SCTP in mind, FIGS. 5 and 6 depict a flowchart 500 and a packet flow 600 respectively that may be carried out by SCTP load balancer 218 of FIG. 2 for performing offline, intelligent SCTP load balancing in a “packet modifying” manner according to an embodiment. These figures are described in combination below. With this packet modifying approach, SCTP load balancer 218 can recover the messages contained in the SCTP packets replicated from core network 204 and effectively repackage these messages into new SCTP packets that are sent to probes/tools 208. In this way, SCTP load balancer 218 can load balance the replicated SCTP traffic on a per-message basis towards probes/tools 208.


Starting with block 502 of flowchart 500, a packet receive module 602 of SCTP load balancer 218 (shown in packet flow 600) can receive an SCTP packet that has been replicated from core network 204 and can determine the SCTP association on which the packet was sent. Packet receive module 602 can make this determination based on, e.g., the SCTP header in the packet.


Assuming that the determined SCTP association is X, packet receive module 602 can check whether a packet queue 604(X), a packet reorder queue 606(X), and a data chunk queue 608(X) exist for association X within SCTP load balancer 218 (block 504). If not, packet receive module 602 can cause a new instance of each of these queues to be created for association X (block 506).


Upon creating queues 604(X)-608(X) (or verifying that they already exist), packet receive module 602 can further check whether the TSN of the first chunk in the SCTP packet has continuity with (i.e., directly follows from) the last TSN received/processed by SCTP load balancer 218 for association X (block 508). If not, this means that the current SCTP packet has been received “out-of-order,” and thus packet receive module 602 can place the SCTP packet in packet reorder queue 606(X) so that it may be processed at a later point in time once the intervening packets for association X have arrived and have been processed (block 510). Flowchart 500 can then return to block 502.
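The TSN continuity check of blocks 508-510 can be sketched as follows; the per-association state dict is a hypothetical stand-in for queues 604(X)-608(X), not the patent's implementation:

```python
TSN_MOD = 2**32  # TSNs are 32-bit and wrap around

def accept_packet(assoc: dict, chunk_tsns: list, packet: bytes) -> bool:
    """Enqueue the packet if its first chunk's TSN directly follows the
    last TSN seen on this association; otherwise park it in the reorder
    queue. assoc = {"last_tsn": int|None, "pkt_queue": [], "reorder": {}}.
    Returns True if the packet (and any now-in-order parked packets)
    was accepted into the packet queue."""
    first = chunk_tsns[0]
    if assoc["last_tsn"] is not None and first != (assoc["last_tsn"] + 1) % TSN_MOD:
        assoc["reorder"][first] = (chunk_tsns, packet)  # out of order
        return False
    assoc["pkt_queue"].append(packet)
    assoc["last_tsn"] = chunk_tsns[-1]
    # Drain any parked packets that the new packet has made contiguous.
    nxt = (assoc["last_tsn"] + 1) % TSN_MOD
    while nxt in assoc["reorder"]:
        tsns, pkt = assoc["reorder"].pop(nxt)
        assoc["pkt_queue"].append(pkt)
        assoc["last_tsn"] = tsns[-1]
        nxt = (assoc["last_tsn"] + 1) % TSN_MOD
    return True
```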


However, if the TSN of the first chunk of the SCTP packet does have continuity with the last TSN for association X, packet receive module 602 can enqueue the SCTP packet to packet queue 604(X) (block 512), enqueue the data chunks in the SCTP packet, in TSN order, to data chunk queue 608(X) (block 514), and pass any control chunks in the SCTP packet to a control chunk processor 610 for handling (block 516). Although not explicitly shown in flowchart 500, control chunk processor 610 can handle certain types of control chunks as indicated below:

    • INIT or INIT_ACK control chunk: processor 610 extracts the endpoint IP addresses, the starting TSN in each uplink/downlink flow, and the maximum number of streams that can be supported on each uplink/downlink flow, and associates this information with association X in an association table 612
    • SACK control chunk: processor 610 sends a signal to stream processor 614 to consume packets with TSNs acknowledged by the SACK chunk
    • Shutdown, shutdown ACK, or shutdown complete control chunk: processor 610 removes entry for association X from association table 612
    • Abort or error control chunk: processor 610 handles errors in connection
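A minimal dispatch over these control chunk types might look like the following; the chunk dicts and field names are illustrative stand-ins, since real control chunks are binary TLV structures:

```python
def handle_control_chunk(ctype: str, chunk: dict, assoc_id: int,
                         assoc_table: dict, signal_consume) -> None:
    """Dispatch one control chunk per the rules listed above (sketch)."""
    if ctype in ("INIT", "INIT_ACK"):
        # Record endpoint and stream parameters for the association.
        assoc_table[assoc_id] = {
            "endpoints": chunk["endpoints"],
            "start_tsn": chunk["initial_tsn"],
            "max_streams": chunk["num_streams"],
        }
    elif ctype == "SACK":
        # Tell the stream processor which TSNs may now be consumed.
        signal_consume(chunk["cumulative_tsn_ack"])
    elif ctype in ("SHUTDOWN", "SHUTDOWN_ACK", "SHUTDOWN_COMPLETE"):
        assoc_table.pop(assoc_id, None)
    elif ctype in ("ABORT", "ERROR"):
        # Error handling is deployment-specific; here we simply drop state.
        assoc_table.pop(assoc_id, None)
```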


Note that, as part of enqueuing data chunks to data chunk queue 608 at block 514, packet receive module 602 can read from association table 612 to retrieve a pointer to queue 608.


At block 518, stream processor 614 can receive a signal from control chunk processor 610 indicating how many received SCTP packets it can consume (as noted above with respect to the SACK control chunk). In response, stream processor 614 can dequeue the data chunks for those packets from data chunk queue(s) 608, de-multiplex the data chunks based on each chunk's SID, and enqueue the de-multiplexed data chunks into a number of stream queues 616 (block 520). In this way, stream processor 614 can separate out the data chunks on a per-stream basis. As part of this process, stream processor 614 can cause the data chunks that have been dequeued from data chunk queue(s) 608 to be deleted from the corresponding packet queue(s) 604.


Further, as part of adding data chunks to stream queues 616, stream processor 614 can check the message boundaries defined by the data chunks. Upon encountering the “E” bit in a data chunk, indicating that a given sequence of data chunks forms a complete SCTP message, stream processor 614 can trigger a message decoder module 618 (block 522).
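Blocks 520-522, per-stream demultiplexing followed by triggering on the “E” bit, can be sketched like this; the chunk objects are assumed to carry stream_id, end, and payload attributes (hypothetical names mirroring the fields described in section 3.2):

```python
from collections import defaultdict  # stream_queues is a defaultdict(list)

def demux_and_reassemble(chunks, stream_queues, on_message):
    """Demultiplex data chunks by stream ID and, whenever a chunk's "E"
    bit closes a message, hand the reassembled payload to the decoder
    callback (playing the role of message decoder module 618)."""
    for c in chunks:
        stream_queues[c.stream_id].append(c)
        if c.end:  # message boundary reached on this stream
            parts = stream_queues[c.stream_id]
            on_message(c.stream_id, b"".join(p.payload for p in parts))
            stream_queues[c.stream_id] = []
```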


In response to being triggered, message decoder module 618 can parse the SCTP message using a protocol parser that is appropriate for the message (e.g., Diameter, S6a, S1AP, etc.) and can extract information from the message that is relevant to the state of the communication session to which the message belongs (block 524). In the case of a mobile user session, this information can include, e.g., user (IMSI) details, user equipment (IMEI) details, and more. Message decoder module 618 can store this state information in a state/session table 620 in order to track the states of ongoing communication sessions.


Then, at block 526, message decoder module 618 can consult an egress configuration table 622 for user-defined load balancing rules/criteria that indicate how the current message should be distributed to the egress ports of packet broker 206 (and thus, to probes/tools 208) in view of the information determined/extracted at block 524. Note that these rules/criteria are completely user-configurable and can correspond to various types of load balancing algorithms such as IMSI-based round robin, message level round robin, etc. The end result of block 526 is that message decoder module 618 can determine a specific mapping between the message and one or more specific egress ports of packet broker 206.
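The two algorithms mentioned above are easy to express in a few lines; both functions are illustrative sketches of how an egress configuration rule might be evaluated, not the patent's implementation:

```python
import itertools
import zlib

def imsi_round_robin(imsi: str, egress_ports: list) -> int:
    """IMSI-based mapping: every message of a subscriber's session lands
    on the same egress port, while subscribers spread across all ports."""
    return egress_ports[zlib.crc32(imsi.encode()) % len(egress_ports)]

def message_round_robin(egress_ports: list):
    """Message-level round robin: each call yields the next port in turn."""
    return itertools.cycle(egress_ports).__next__
```

Note the trade-off between the two: IMSI-based mapping keeps all of a session's messages on one probe/tool (preserving state-based analyses), while message-level round robin spreads load most evenly but offers no session stickiness.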


At block 528, message decoder module 618 can pass the message and the mapped egress port(s) to an SCTP transmit module 624. SCTP transmit module 624 can maintain pre-opened SCTP associations between packet broker 206 and each probe/tool 208. Finally, at block 530, SCTP transmit module 624 can package the message into a new SCTP packet and transmit the newly created SCTP packet with the message out of the egress port towards a probe/tool 208.


It should be appreciated that flowchart 500 and packet flow 600 of FIGS. 5 and 6 are illustrative and various modifications are possible. For example, although the preceding description suggests that each data chunk queue 608 maintains an actual copy of the data chunks for packets added to the packet queue 604, in some embodiments each data chunk queue 608 may simply comprise pointers to the packet queue (which holds the actual data chunk data). This approach can reduce the memory footprint of the solution.


Further, while message decoder module 618 can track protocol state information and use this information to direct the load balancing process as noted above, in some embodiments module 618 may not do so. Instead, message decoder module 618 may simply decode SCTP parameters (e.g., message boundaries, stream IDs, etc.) and apply these SCTP parameters for load balancing purposes. One of ordinary skill in the art will recognize other variations, modifications, and alternatives.


5. Packet Preserving SCTP Load Balancing

As noted previously, as an alternative to the packet modifying approach of section (4), SCTP load balancer 218 can implement a “packet preserving” approach for performing intelligent load balancing of SCTP traffic. A flowchart 700 and a packet flow 800 for this packet preserving approach are presented in FIGS. 7 and 8 respectively and described below. At a high level, the packet preserving approach is similar to the packet modifying approach but is designed to forward load balanced SCTP messages to probes/tools 208 in the form of the original SCTP packets received at packet broker 206 (rather than in the form of newly created SCTP packets). Thus, with this approach, there is no need to open SCTP associations between packet broker 206 and probes/tools 208.


Blocks 702-726 of flowchart 700 are generally similar to blocks 502-526 of flowchart 500, with the caveat that stream processor 614 does not delete packets from the packet queues upon enqueuing data chunks to the stream queues. Instead, stream processor 614 can insert a reference counter into each packet in packet queue 604 indicating the number of complete messages included in that packet.


At block 728, a packet transmit module 626 that is shown in FIG. 8 (rather than SCTP transmit module 624) can receive the message and mapped egress port(s) from message decoder module 618. Then, at block 730, packet transmit module 626 can retrieve the original SCTP packet(s) corresponding to the message from packet queue(s) 604. Finally, at block 732, packet transmit module 626 can forward the original SCTP packet(s) as-is out of the mapped egress port(s) towards one or more probes/tools 208. Note that if a given packet contains multiple messages that are mapped to different egress ports, packet transmit module 626 can forward the packet multiple times (one for each different egress port).
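The reference-counting and multi-port forwarding behavior described in this section can be sketched as follows (the class and function names are illustrative):

```python
class PreservedPacket:
    """An original replicated SCTP packet held in a packet queue, with a
    reference count equal to its number of complete messages."""
    def __init__(self, data: bytes, num_messages: int):
        self.data = data
        self.refcount = num_messages

def forward_message(pkt: PreservedPacket, egress_ports, send) -> bool:
    """Forward the intact original packet once per distinct egress port
    that this message mapped to, then decrement the reference count.
    Returns True when all messages are forwarded and the packet can be
    released from the packet queue."""
    for port in sorted(set(egress_ports)):
        send(port, pkt.data)
    pkt.refcount -= 1
    return pkt.refcount == 0
```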


6. Example Network Device


FIG. 9 depicts an example network device (e.g., switch/router) 900 according to an embodiment. Network switch/router 900 can be used to implement packet broker 206/SCTP load balancer 218 (or a portion thereof) according to an embodiment.


As shown, network switch/router 900 includes a management module 902, a switch fabric module 904, and a number of line cards 906(1)-906(N). Management module 902 includes one or more management CPUs 908 for managing/controlling the operation of the device. Each management CPU 908 can be a general purpose processor, such as a PowerPC, Intel, AMD, or ARM-based processor, that operates under the control of software stored in an associated memory (not shown).


Switch fabric module 904 and line cards 906(1)-906(N) collectively represent the data, or forwarding, plane of network switch/router 900. Switch fabric module 904 is configured to interconnect the various other modules of network switch/router 900. Each line card 906(1)-906(N) can include one or more ingress/egress ports 910(1)-910(N) that are used by network switch/router 900 to send and receive packets. Each line card 906(1)-906(N) can also include a packet processor 912(1)-912(N). Packet processor 912(1)-912(N) is a hardware processing component (e.g., an FPGA or ASIC) that can make wire speed decisions on how to handle incoming or outgoing traffic.


It should be appreciated that network switch/router 900 is illustrative and not intended to limit embodiments of the present invention. Many other configurations having more or fewer components than switch/router 900 are possible.


7. Example Computer System


FIG. 10 depicts an example computer system 1000 according to an embodiment. Computer system 1000 can be used to implement packet broker 206/SCTP load balancer 218 (or a portion thereof) according to an embodiment.


As shown in FIG. 10, computer system 1000 can include one or more general purpose processors (e.g., CPUs) 1002 that communicate with a number of peripheral devices via a bus subsystem 1004. These peripheral devices can include a storage subsystem 1006 (comprising a memory subsystem 1008 and a file storage subsystem 1010), user interface input devices 1012, user interface output devices 1014, and a network interface subsystem 1016.


Bus subsystem 1004 can provide a mechanism for letting the various components and subsystems of computer system 1000 communicate with each other as intended. Although bus subsystem 1004 is shown schematically as a single bus, alternative embodiments of the bus subsystem can utilize multiple buses.


Network interface subsystem 1016 can serve as an interface for communicating data between computer system 1000 and other computing devices or networks. Embodiments of network interface subsystem 1016 can include wired (e.g., coaxial, twisted pair, or fiber optic Ethernet) and/or wireless (e.g., Wi-Fi, cellular, Bluetooth, etc.) interfaces.


User interface input devices 1012 can include a keyboard, pointing devices (e.g., mouse, trackball, touchpad, etc.), a scanner, a barcode scanner, a touch-screen incorporated into a display, audio input devices (e.g., voice recognition systems, microphones, etc.), and other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and mechanisms for inputting information into computer system 1000.


User interface output devices 1014 can include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices, etc. The display subsystem can be a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), or a projection device. In general, use of the term “output device” is intended to include all possible types of devices and mechanisms for outputting information from computer system 1000.


Storage subsystem 1006 can include a memory subsystem 1008 and a file/disk storage subsystem 1010. Subsystems 1008 and 1010 represent non-transitory computer-readable storage media that can store program code and/or data that provide the functionality of various embodiments described herein.


Memory subsystem 1008 can include a number of memories including a main random access memory (RAM) 1018 for storage of instructions and data during program execution and a read-only memory (ROM) 1020 in which fixed instructions are stored. File storage subsystem 1010 can provide persistent (i.e., nonvolatile) storage for program and data files and can include a magnetic or solid-state hard disk drive, an optical drive along with associated removable media (e.g., CD-ROM, DVD, Blu-Ray, etc.), a removable flash memory-based drive or card, and/or other types of storage media known in the art.


It should be appreciated that computer system 1000 is illustrative and not intended to limit embodiments of the present disclosure. Many other configurations having more or fewer components than computer system 1000 are possible.


The above description illustrates various embodiments of the present disclosure along with examples of how aspects of the present disclosure may be implemented. The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of the present disclosure as defined by the following claims. For example, although certain embodiments have been described with respect to particular workflows and steps, it should be apparent to those skilled in the art that the scope of the present disclosure is not strictly limited to the described workflows and steps. Steps described as sequential may be executed in parallel, order of steps may be varied, and steps may be modified, combined, added, or omitted. As another example, although certain embodiments have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are possible, and that specific operations described as being implemented in software can also be implemented in hardware and vice versa.


The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense. Other arrangements, embodiments, implementations and equivalents will be evident to those skilled in the art and may be employed without departing from the spirit and scope of the invention as set forth in the following claims.

Claims
  • 1. A method comprising: receiving a Stream Control Transmission Protocol (SCTP) packet that has been replicated from a network being monitored, wherein the SCTP packet comprises a plurality of data chunks including a first data chunk and another data chunk, wherein the first data chunk includes a first transmission sequence number, wherein the first data chunk carries a payload for an SCTP message, wherein the SCTP message is mapped to a first egress port, and wherein the another data chunk carries another payload for another SCTP message, wherein the another SCTP message is mapped to a second egress port; determining an SCTP association for the SCTP packet, wherein the SCTP association comprises a first endpoint and a second endpoint of an SCTP connection and wherein the first transmission sequence number identifies an ordering of the first data chunk relative to other data chunks within the SCTP association; selecting a packet queue and a data chunk queue based on the SCTP association; placing the SCTP packet into the packet queue; placing the first data chunk into the data chunk queue based on the first transmission sequence number; detecting a defined message boundary in the first data chunk of the plurality of data chunks, wherein the defined message boundary indicates a boundary of the SCTP message; triggering a message decoder module based on detecting the defined message boundary; determining, by the message decoder module, a mapping of the SCTP message to the first egress port and a second mapping of the another SCTP message to the second egress port; and transmitting, based on the mapping, a first copy of the SCTP packet via the first egress port and, based on the second mapping, a second copy of the SCTP packet via the second egress port.
  • 2. The method of claim 1, wherein the message decoder module is triggered based on the boundary of the SCTP message indicating that the first data chunk is a last chunk of the SCTP message.
  • 3. The method of claim 1, further comprising: selecting a packet reorder queue based on the SCTP association; determining that the SCTP packet is received out of order based on the first transmission sequence number in an SCTP header of the SCTP packet; and placing the SCTP packet in the packet reorder queue based on the determining.
  • 4. The method of claim 3, wherein the SCTP association is determined based on information in the SCTP header of the SCTP packet.
  • 5. The method of claim 1, wherein the first data chunk further includes a stream identifier and a stream sequence number, wherein the stream identifier identifies a stream to which the first data chunk belongs and wherein the stream sequence number identifies a message within the stream to which the first data chunk belongs.
  • 6. The method of claim 1, wherein the SCTP packet further includes a control chunk, the method further comprising: passing the control chunk to a control chunk processor; receiving, from the control chunk processor, a signal for processing the SCTP packet, wherein the signal is generated based on a type of the control chunk; and processing the plurality of data chunks of the SCTP packet based on the signal.
  • 7. The method of claim 1, wherein the plurality of data chunks comprises a second data chunk, wherein the second data chunk includes a second transmission sequence number and wherein the second data chunk carries a second payload for the SCTP message.
  • 8. The method of claim 7, wherein the another data chunk includes a third transmission sequence number.
  • 9. The method of claim 8, wherein the SCTP message is associated with a first stream and the another SCTP message is associated with a second stream.
  • 10. The method of claim 9, further comprising: placing the first data chunk and the second data chunk into a first stream queue, wherein the first stream queue is associated with the first stream; and placing the another data chunk into a second stream queue, wherein the second stream queue is associated with the second stream.
  • 11. The method of claim 1, wherein the transmitting comprises: retrieving, from the packet queue, the first copy of the SCTP packet that contains the SCTP message.
  • 12. The method of claim 1, wherein another SCTP packet has been replicated from the network being monitored, the method further comprising: receiving the another SCTP packet, wherein the SCTP packet and the another SCTP packet originate from different SCTP associations in the network being monitored.
  • 13. The method of claim 1, wherein mapping the SCTP message comprises: extracting session information from the SCTP message; and detecting, based on the session information, a load-balancing rule that maps the SCTP message to the first egress port.
  • 14. The method of claim 1, wherein the defined message boundary comprises at least one of a begin bit and an end bit, wherein the begin bit indicates that the first data chunk is a first chunk of the SCTP message and the end bit indicates that the first data chunk is a last chunk of the SCTP message.
  • 15. A non-transitory computer readable storage medium having stored thereon program code that when executed by a load balancer processor cause the load balancer processor to perform operations comprising: receiving a Stream Control Transmission Protocol (SCTP) packet that has been replicated from a network being monitored, wherein the SCTP packet comprises a plurality of data chunks including a first data chunk and another data chunk, wherein the first data chunk includes a first transmission sequence number and wherein the first data chunk carries a payload for an SCTP message, wherein the SCTP message is mapped to a first egress port, and wherein the another data chunk carries another payload for a second SCTP message, wherein the second SCTP message is mapped to a second egress port; determining an SCTP association for the SCTP packet, wherein the SCTP association comprises a first endpoint and a second endpoint of an SCTP connection and wherein the first transmission sequence number identifies an ordering of the first data chunk relative to other data chunks within the SCTP association; selecting a packet queue and a data chunk queue based on the SCTP association; placing the SCTP packet into the packet queue; placing the first data chunk into the data chunk queue based on the first transmission sequence number; detecting a defined message boundary in the first data chunk of the plurality of data chunks, wherein the defined message boundary indicates a boundary of the SCTP message; triggering a message decoder module based on detecting the defined message boundary; determining, by the message decoder module, a mapping of the SCTP message to the first egress port and a second mapping of the another SCTP message to the second egress port; and transmitting, based on the mapping, a first copy of the SCTP packet via the first egress port and, based on the second mapping, a second copy of the SCTP packet via the second egress port.
  • 16. The non-transitory computer readable storage medium of claim 15, wherein the SCTP packet further includes a control chunk, the operations further comprising: passing the control chunk to a control chunk processor; receiving, from the control chunk processor, a signal for processing the SCTP packet, wherein the signal is generated based on a type of the control chunk; and processing the plurality of data chunks of the SCTP packet based on the signal.
  • 17. The non-transitory computer readable storage medium of claim 15, wherein the plurality of data chunks comprises a second data chunk, wherein the second data chunk includes a second transmission sequence number and wherein the second data chunk carries a second payload for the SCTP message.
  • 18. A load balancing system comprising: a processor; and a non-transitory computer readable medium having stored thereon program code that, when executed by the processor, causes the processor to: receive a Stream Control Transmission Protocol (SCTP) packet that has been replicated from a network being monitored, wherein the SCTP packet comprises a plurality of data chunks including a first data chunk and another data chunk, wherein the first data chunk includes a first transmission sequence number and wherein the first data chunk carries a payload for an SCTP message, wherein the SCTP message is mapped to a first egress port, and wherein the another data chunk carries another payload for a second SCTP message, wherein the second SCTP message is mapped to a second egress port; determine an SCTP association for the SCTP packet, wherein the SCTP association comprises a first endpoint and a second endpoint of an SCTP connection and wherein the first transmission sequence number identifies an ordering of the first data chunk relative to other data chunks within the SCTP association; select a packet queue and a data chunk queue based on the SCTP association; place the SCTP packet into the packet queue; place the first data chunk into the data chunk queue based on the first transmission sequence number; detect a defined message boundary in the first data chunk of the plurality of data chunks, wherein the defined message boundary indicates a boundary of the SCTP message; trigger a message decoder module based on detecting the defined message boundary; determine, by the message decoder module, a mapping of the SCTP message to the first egress port and a second mapping of the another SCTP message to the second egress port; and transmit, based on the mapping, a first copy of the SCTP packet via the first egress port and, based on the second mapping, a second copy of the SCTP packet via the second egress port.
  • 19. The load balancing system of claim 18, wherein the SCTP packet further includes a control chunk, the processor further configured to: pass the control chunk to a control chunk processor; receive, from the control chunk processor, a signal for processing the SCTP packet, wherein the signal is generated based on a type of the control chunk; and process the plurality of data chunks of the SCTP packet based on the signal.
  • 20. The load balancing system of claim 18, wherein the plurality of data chunks comprises a second data chunk, wherein the second data chunk includes a second transmission sequence number and wherein the second data chunk carries a second payload for the SCTP message.
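The receive-reassemble-map pipeline recited in the claims above can be sketched in a few dozen lines of Python. The DATA chunk field layout and the B (begin) and E (end) fragment-boundary flag bits follow RFC 4960; everything else here is an illustrative assumption, not the claimed implementation: fragments are buffered per (stream identifier, stream sequence number), joined in transmission-sequence-number order once an end boundary is detected, and mapped to an egress port by a simple modulo rule standing in for the configured load-balancing rules.

```python
import struct
from collections import defaultdict

# RFC 4960 section 3.3.1: DATA chunk type and flag bits.
DATA_CHUNK_TYPE = 0
FLAG_E = 0x01  # ending fragment (last chunk) of a user message
FLAG_B = 0x02  # beginning fragment (first chunk) of a user message
FLAG_U = 0x04  # unordered delivery

def build_data_chunk(flags, tsn, stream_id, stream_seq, payload, ppid=0):
    """Encode a DATA chunk (16-byte header + payload, padding omitted) for the demo."""
    length = 16 + len(payload)
    return struct.pack("!BBHIHHI", DATA_CHUNK_TYPE, flags, length,
                       tsn, stream_id, stream_seq, ppid) + payload

def parse_data_chunk(buf):
    """Decode one DATA chunk; returns (flags, tsn, stream_id, stream_seq, payload)."""
    ctype, flags, length, tsn, sid, ssn, ppid = struct.unpack_from("!BBHIHHI", buf)
    assert ctype == DATA_CHUNK_TYPE
    return flags, tsn, sid, ssn, buf[16:length]

class MessageLoadBalancer:
    """Per-message reassembly plus a hypothetical stream-id -> egress-port mapping."""

    def __init__(self, num_egress_ports):
        self.num_ports = num_egress_ports
        # (stream_id, stream_seq) -> list of (tsn, payload) fragments seen so far
        self.fragments = defaultdict(list)

    def on_chunk(self, chunk_bytes):
        """Buffer a fragment; on an E-flagged chunk, emit (message, egress_port)."""
        flags, tsn, sid, ssn, payload = parse_data_chunk(chunk_bytes)
        key = (sid, ssn)
        self.fragments[key].append((tsn, payload))
        if flags & FLAG_E:  # defined message boundary: trigger the "decoder"
            # Sorting by TSN orders fragments even if packets arrived out of order.
            parts = [p for _, p in sorted(self.fragments.pop(key))]
            message = b"".join(parts)
            egress = sid % self.num_ports  # illustrative mapping rule only
            return message, egress
        return None
```

Sorting the buffered fragments by TSN before concatenation mirrors the claims' use of the transmission sequence number to order chunks within an association; a production decoder would additionally verify that the B-flagged first fragment is present and apply the configured load-balancing rules rather than a modulo of the stream identifier.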
Priority Claims (1)
Number: 201641010295; Date: Mar. 2016; Country: IN; Kind: national
US Referenced Citations (342)
Number Name Date Kind
5031094 Toegel et al. Jul 1991 A
5359593 Derby et al. Oct 1994 A
5948061 Merriman et al. Sep 1999 A
5951634 Sitbon et al. Sep 1999 A
6006269 Phaal Dec 1999 A
6006333 Nielsen Dec 1999 A
6078956 Bryant et al. Jun 2000 A
6092178 Jindal et al. Jul 2000 A
6112239 Kenner et al. Aug 2000 A
6115752 Chauhan Sep 2000 A
6128279 O'Neil et al. Oct 2000 A
6128642 Doraswamy et al. Oct 2000 A
6148410 Baskey et al. Nov 2000 A
6167445 Gai et al. Dec 2000 A
6167446 Lister et al. Dec 2000 A
6182139 Brendel Jan 2001 B1
6195691 Brown Feb 2001 B1
6205477 Johnson et al. Mar 2001 B1
6233604 Van Horne et al. May 2001 B1
6260070 Shab Jul 2001 B1
6286039 Van Horne et al. Sep 2001 B1
6286047 Ramanathan et al. Sep 2001 B1
6304913 Rune Oct 2001 B1
6324580 Jindal et al. Nov 2001 B1
6327622 Jindal et al. Dec 2001 B1
6336137 Lee et al. Jan 2002 B1
6381627 Kwan et al. Apr 2002 B1
6389462 Cohen et al. May 2002 B1
6427170 Sitaraman et al. Jul 2002 B1
6434118 Kirschenbaum Aug 2002 B1
6438652 Jordan et al. Aug 2002 B1
6446121 Shah et al. Sep 2002 B1
6449657 Stanbach, Jr. et al. Sep 2002 B2
6470389 Chung et al. Oct 2002 B1
6473802 Masters Oct 2002 B2
6480508 Mwikalo et al. Nov 2002 B1
6490624 Sampson et al. Dec 2002 B1
6549944 Weinberg et al. Apr 2003 B1
6567377 Vepa et al. May 2003 B1
6578066 Logan et al. Jun 2003 B1
6606643 Emens et al. Aug 2003 B1
6665702 Zisapel et al. Dec 2003 B1
6671275 Wong et al. Dec 2003 B1
6681232 Sitanizadeh et al. Jan 2004 B1
6681323 Fontsnesi et al. Jan 2004 B1
6691165 Bruck et al. Feb 2004 B1
6697368 Chang et al. Feb 2004 B2
6735218 Chang et al. May 2004 B2
6745241 French et al. Jun 2004 B1
6751616 Chan Jun 2004 B1
6754706 Swildens et al. Jun 2004 B1
6772211 Lu et al. Aug 2004 B2
6779017 Lamberton et al. Aug 2004 B1
6789125 Aviani et al. Sep 2004 B1
6821891 Chen et al. Nov 2004 B2
6826198 Turina et al. Nov 2004 B2
6831891 Mansharamani et al. Dec 2004 B2
6839700 Doyle et al. Jan 2005 B2
6850984 Kalkunte et al. Feb 2005 B1
6874152 Vermeire et al. Mar 2005 B2
6879995 Chinta et al. Apr 2005 B1
6898633 Lyndersay et al. May 2005 B1
6901072 Wong May 2005 B1
6901081 Ludwig May 2005 B1
6920498 Gourlay et al. Jul 2005 B1
6928485 Krishnamurthy et al. Aug 2005 B1
6944678 Lu et al. Sep 2005 B2
6963914 Breitbart et al. Nov 2005 B1
6963917 Callis et al. Nov 2005 B1
6985956 Luke et al. Jan 2006 B2
6987763 Rochberger et al. Jan 2006 B2
6996615 McGuire Feb 2006 B1
6996616 Leighton et al. Feb 2006 B1
7000007 Valenti Feb 2006 B1
7009086 Brown et al. Mar 2006 B2
7009968 Ambe et al. Mar 2006 B2
7020698 Andrews et al. Mar 2006 B2
7020714 Kalyanaraman et al. Mar 2006 B2
7028083 Levine et al. Apr 2006 B2
7031304 Arberg et al. Apr 2006 B1
7032010 Swildens et al. Apr 2006 B1
7036039 Holland Apr 2006 B2
7058706 Iyer et al. Jun 2006 B1
7058717 Chao et al. Jun 2006 B2
7062642 Langrind et al. Jun 2006 B1
7086061 Joshi et al. Aug 2006 B1
7089293 Grosner et al. Aug 2006 B2
7095738 Desanti Aug 2006 B1
7117530 Lin Oct 2006 B1
7126910 Sridhar Oct 2006 B1
7127713 Davis et al. Oct 2006 B2
7136932 Schneider Nov 2006 B1
7139242 Bays Nov 2006 B2
7177933 Foth Feb 2007 B2
7177943 Temoshenko et al. Feb 2007 B1
7185052 Day Feb 2007 B2
7187687 Davis et al. Mar 2007 B1
7188189 Karol et al. Mar 2007 B2
7197547 Miller et al. Mar 2007 B1
7206806 Pineau Apr 2007 B2
7215637 Ferguson et al. May 2007 B1
7225272 Kelley et al. May 2007 B2
7240015 Karmouch et al. Jul 2007 B1
7240100 Wein et al. Jul 2007 B1
7254626 Kommula et al. Aug 2007 B1
7257642 Bridger et al. Aug 2007 B1
7260645 Bays Aug 2007 B2
7266117 Davis Sep 2007 B1
7266120 Cheng et al. Sep 2007 B2
7277954 Stewart et al. Oct 2007 B1
7292573 LaVigne et al. Nov 2007 B2
7296088 Padmanabhan et al. Nov 2007 B1
7321926 Zhang et al. Jan 2008 B1
7424018 Gallatin et al. Sep 2008 B2
7436832 Gallatin et al. Oct 2008 B2
7440467 Gallatin et al. Oct 2008 B2
7441045 Skene et al. Oct 2008 B2
7450527 Ashwood Smith Nov 2008 B2
7454500 Hsu et al. Nov 2008 B1
7483374 Nilakantan et al. Jan 2009 B2
7492713 Turner et al. Feb 2009 B1
7506065 LaVigne et al. Mar 2009 B2
7539134 Bowes May 2009 B1
7555562 See et al. Jun 2009 B2
7558195 Kuo et al. Jul 2009 B1
7574508 Kommula Aug 2009 B1
7581009 Hsu et al. Aug 2009 B1
7584301 Joshi Sep 2009 B1
7587487 Gunturu Sep 2009 B1
7606203 Shabtay et al. Oct 2009 B1
7647427 Devarapalli Jan 2010 B1
7657629 Kommula Feb 2010 B1
7690040 Frattura et al. Mar 2010 B2
7706363 Daniel et al. Apr 2010 B1
7716370 Devarapalli May 2010 B1
7720066 Weyman et al. May 2010 B2
7720076 Dobbins et al. May 2010 B2
7746789 Katoh et al. Jun 2010 B2
7747737 Apte et al. Jun 2010 B1
7756965 Joshi Jul 2010 B2
7774833 Szeto et al. Aug 2010 B1
7787454 Won et al. Aug 2010 B1
7792047 Gallatin et al. Sep 2010 B2
7835348 Kasralikar Nov 2010 B2
7835358 Gallatin et al. Nov 2010 B2
7840678 Joshi Nov 2010 B2
7848326 Leong et al. Dec 2010 B1
7889748 Leong et al. Feb 2011 B1
7899899 Joshi Mar 2011 B2
7940766 Olakangil et al. May 2011 B2
7953089 Ramakrishnan et al. May 2011 B1
8018943 Pleshek et al. Sep 2011 B1
8208494 Leong Jun 2012 B2
8238344 Chen et al. Aug 2012 B1
8239960 Frattura et al. Aug 2012 B2
8248928 Wang et al. Aug 2012 B1
8270845 Cheung et al. Sep 2012 B2
8315256 Leong et al. Nov 2012 B2
8386846 Cheung Feb 2013 B2
8391286 Gallatin et al. Mar 2013 B2
8504721 Hsu et al. Aug 2013 B2
8514718 Zjist Aug 2013 B2
8537697 Leong et al. Sep 2013 B2
8570862 Leong et al. Oct 2013 B1
8615008 Natarajan et al. Dec 2013 B2
8654651 Leong et al. Feb 2014 B2
8824466 Won et al. Sep 2014 B2
8830819 Leong et al. Sep 2014 B2
8873557 Nguyen Oct 2014 B2
8891527 Wang Nov 2014 B2
8897138 Yu et al. Nov 2014 B2
8953458 Leong et al. Feb 2015 B2
9155075 Song et al. Oct 2015 B2
9264446 Goldfarb et al. Feb 2016 B2
9270566 Wang et al. Feb 2016 B2
9270592 Sites Feb 2016 B1
9294367 Natarajan et al. Mar 2016 B2
9356866 Sivaramakrishnan et al. May 2016 B1
9380002 Johansson et al. Jun 2016 B2
9479415 Natarajan et al. Oct 2016 B2
9565138 Chen et al. Feb 2017 B2
9648542 Hsu et al. May 2017 B2
20010049741 Skene et al. Dec 2001 A1
20010052016 Skene et al. Dec 2001 A1
20020009081 Sampath et al. Jan 2002 A1
20020018796 Wironen Feb 2002 A1
20020023089 Woo Feb 2002 A1
20020026551 Kamimaki et al. Feb 2002 A1
20020038360 Andrews et al. Mar 2002 A1
20020055939 Nardone et al. May 2002 A1
20020059170 Vange May 2002 A1
20020059464 Hata et al. May 2002 A1
20020062372 Hong et al. May 2002 A1
20020078233 Biliris et al. Jun 2002 A1
20020091840 Pulier et al. Jul 2002 A1
20020112036 Bohannan et al. Aug 2002 A1
20020120743 Shabtay et al. Aug 2002 A1
20020124096 Loguinov et al. Sep 2002 A1
20020133601 Kennamer et al. Sep 2002 A1
20020150048 Ha et al. Oct 2002 A1
20020154600 Ido et al. Oct 2002 A1
20020188862 Trethewey et al. Dec 2002 A1
20020194324 Guba Dec 2002 A1
20020194335 Maynard Dec 2002 A1
20030023744 Sadot et al. Jan 2003 A1
20030031185 Kikuchi et al. Feb 2003 A1
20030035430 Islam et al. Feb 2003 A1
20030065711 Acharya et al. Apr 2003 A1
20030065763 Swildens et al. Apr 2003 A1
20030105797 Dolev et al. Jun 2003 A1
20030115283 Barbir et al. Jun 2003 A1
20030135509 Davis et al. Jul 2003 A1
20030202511 Sreejith et al. Oct 2003 A1
20030210686 Terrell et al. Nov 2003 A1
20030210694 Jayaraman et al. Nov 2003 A1
20030229697 Borella Dec 2003 A1
20040019680 Chao et al. Jan 2004 A1
20040024872 Kelley et al. Feb 2004 A1
20040032868 Oda et al. Feb 2004 A1
20040064577 Dahlin et al. Apr 2004 A1
20040194102 Neerdaels Sep 2004 A1
20040243718 Fujiyoshi Dec 2004 A1
20040249939 Amini et al. Dec 2004 A1
20040249971 Klinker Dec 2004 A1
20050021883 Sbishizuka et al. Jan 2005 A1
20050033858 Swildens et al. Feb 2005 A1
20050060418 Sorokopud Mar 2005 A1
20050060427 Phillips et al. Mar 2005 A1
20050086295 Cunningham et al. Apr 2005 A1
20050149531 Srivastava Jul 2005 A1
20050169180 Ludwig Aug 2005 A1
20050190695 Phaal Sep 2005 A1
20050207417 Ogawa et al. Sep 2005 A1
20050278565 Frattura et al. Dec 2005 A1
20050286416 Shimonishi et al. Dec 2005 A1
20060036743 Deng et al. Feb 2006 A1
20060039374 Belz et al. Feb 2006 A1
20060045082 Fertell et al. Mar 2006 A1
20060143300 See et al. Jun 2006 A1
20070044141 Lor et al. Feb 2007 A1
20070053296 Yazaki et al. Mar 2007 A1
20070171918 Ota et al. Mar 2007 A1
20070195761 Tatar et al. Aug 2007 A1
20070233891 Luby et al. Oct 2007 A1
20080002591 Ueno Jan 2008 A1
20080028077 Kamata et al. Jan 2008 A1
20080031141 Lean et al. Feb 2008 A1
20080089336 Mercier et al. Apr 2008 A1
20080137660 Olakangil et al. Jun 2008 A1
20080159141 Soukup et al. Jul 2008 A1
20080181119 Beyers Jul 2008 A1
20080195731 Harmel et al. Aug 2008 A1
20080225710 Raja et al. Sep 2008 A1
20080304423 Chuang et al. Dec 2008 A1
20090109933 Murasawa et al. Apr 2009 A1
20090135835 Gallatin et al. May 2009 A1
20090240644 Boettcher et al. Sep 2009 A1
20090245244 Coene Oct 2009 A1
20090262745 Leong et al. Oct 2009 A1
20100011126 Hsu et al. Jan 2010 A1
20100135323 Leong Jun 2010 A1
20100209047 Cheung et al. Aug 2010 A1
20100228974 Watts et al. Sep 2010 A1
20100293296 Hsu et al. Nov 2010 A1
20100325178 Won et al. Dec 2010 A1
20110044349 Gallatin et al. Feb 2011 A1
20110058566 Leong et al. Mar 2011 A1
20110211443 Leong et al. Sep 2011 A1
20110216771 Gallatin et al. Sep 2011 A1
20110283016 Uchida Nov 2011 A1
20120023340 Cheung Jan 2012 A1
20120033556 Kruglick Feb 2012 A1
20120069737 Vikberg et al. Mar 2012 A1
20120103518 Kakimoto et al. May 2012 A1
20120157088 Gerber et al. Jun 2012 A1
20120201137 Le Faucheur et al. Aug 2012 A1
20120243533 Leong Sep 2012 A1
20120257635 Gallatin et al. Oct 2012 A1
20120275311 Ivershen Nov 2012 A1
20130010613 Cafarelli et al. Jan 2013 A1
20130028072 Addanki Jan 2013 A1
20130034107 Leong et al. Feb 2013 A1
20130156029 Gallatin et al. Jun 2013 A1
20130173784 Wang et al. Jul 2013 A1
20130201984 Wang Aug 2013 A1
20130259037 Natarajan et al. Oct 2013 A1
20130272135 Leong Oct 2013 A1
20130281098 Fujii Oct 2013 A1
20130339540 Sheer Dec 2013 A1
20140003333 Ivershen et al. Jan 2014 A1
20140016500 Leong et al. Jan 2014 A1
20140022916 Natarajan et al. Jan 2014 A1
20140029451 Nguyen Jan 2014 A1
20140040478 Hsu et al. Feb 2014 A1
20140101297 Neisinger et al. Apr 2014 A1
20140161120 Lkaheimo Jun 2014 A1
20140204747 Yu et al. Jul 2014 A1
20140219100 Pandey et al. Aug 2014 A1
20140233399 Mann et al. Aug 2014 A1
20140321278 Cafarelli et al. Oct 2014 A1
20150009828 Murakami Jan 2015 A1
20150009830 Bisht et al. Jan 2015 A1
20150016306 Masini et al. Jan 2015 A1
20150033169 Lection et al. Jan 2015 A1
20150071171 Akiyoshi Mar 2015 A1
20150103824 Tanabe Apr 2015 A1
20150142935 Srinivas et al. May 2015 A1
20150170920 Purayath et al. Jun 2015 A1
20150180802 Chen et al. Jun 2015 A1
20150195192 Vasseur et al. Jul 2015 A1
20150207905 Merchant et al. Jul 2015 A1
20150215841 Hsu et al. Jul 2015 A1
20150256436 Stoyanov et al. Sep 2015 A1
20150263889 Newton Sep 2015 A1
20150281125 Koponen et al. Oct 2015 A1
20150319070 Nachum Nov 2015 A1
20150372840 Benny et al. Dec 2015 A1
20160119234 Valencia Lopez et al. Apr 2016 A1
20160149811 Roch et al. May 2016 A1
20160164768 Natarajan et al. Jun 2016 A1
20160182329 Armolavicius et al. Jun 2016 A1
20160182369 Vasudevan Jun 2016 A1
20160182378 Basavaraja et al. Jun 2016 A1
20160204996 Lindgren et al. Jul 2016 A1
20160248655 Francisco et al. Aug 2016 A1
20160255021 Renfrew Sep 2016 A1
20160285735 Chen et al. Sep 2016 A1
20160285762 Chen et al. Sep 2016 A1
20160285763 Laxman et al. Sep 2016 A1
20160308766 Register et al. Oct 2016 A1
20160373303 Vedam et al. Dec 2016 A1
20160373304 Sharma et al. Dec 2016 A1
20160373351 Sharma et al. Dec 2016 A1
20160373352 Sharma et al. Dec 2016 A1
20160380861 Ali Dec 2016 A1
20170026405 Vengalil Jan 2017 A1
20170187649 Chen et al. Jun 2017 A1
20170237632 Hegde et al. Aug 2017 A1
20170237633 Hegde et al. Aug 2017 A1
20170237838 Vandevoorde et al. Aug 2017 A1
20170331665 Porfiri Nov 2017 A1
20180367651 Li Dec 2018 A1
Foreign Referenced Citations (10)
Number Date Country
101677292 Mar 2010 CN
2654340 Oct 2013 EP
3206344 Aug 2017 EP
3206345 Aug 2017 EP
20070438 Feb 2008 IE
201641016960 May 2016 IN
201641035761 Oct 2016 IN
WO 2010135474 Nov 2010 WO
WO 2015116538 Aug 2015 WO
WO 2015138513 Sep 2015 WO
Non-Patent Literature Citations (141)
Entry
Armando L. Caro Jr. etc., SCTP: A Proposed Standard for Robust Internet Data Transport, 2003, IEEE Computer Society, 0018-9162/03 (Year: 2003).
R. Stewart, Ed, RFC4960, Stream Control Transmission Protocol (Year: 2007).
Indian Provisional Patent Application entitled: “System and Method for Offiine Load Balancing of SCTP Protocol Traffic”; Appln. No. 201641010295 filed Mar. 24, 2016; 13 pages.
7433 GTP Session Controller, www.ixia.com, downloaded circa Apr. 12, 2015, pp. 1-3.
Stateful GTP Correlation, https://www.gigamon.com/PDF/appnote/AN-GTP-Correlation-Stateful-Subscriber-Aware-Filtering-4025.pdf, date 2013, pp. 1-9.
Giga VUE-2404 // Data Sheet, www.gigamon.com, date Feb. 2014, pp. 1-6.
NGenius Performance Manager, www.netscout.com, date Mar. 2014, pp. 1-8.
Giga VUE-VM // Data Sheet, www.gigamon.com, date Oct. 2014, pp. 1-3.
Unified Visibility Fabric an Innovative Approach, https://www.gigamon.com/unified-visibility-fabric, downloaded circa Mar. 30, 2015, pp. 1-4.
adaptiv.io and Apsalar Form Strategic Partnership to Provide Omni-channel Mobile Data Intelligence, http://www.businesswire.com/news/home/20150113005721/en/adaptiv.io-Apsalar-Form-Strategic-Partnership-Provide-Omni-channel, Downloaded circa Mar. 30, 2015, pp. 1-2.
Real-time Data Analytics with IBM InfoSphere Streams and Brocade MLXe Series Devices, www.brocade.com, date 2011, pp. 1-2.
Syniverse Proactive Roaming Data Analysis—VisProactive, http://m.syniverse.com/files/service_solutions/pdf/solutionsheet_visproactive_314.pdf., date 2014, pp. 1-3.
Network Analytics: Product Overview, www.sandvine.com, date Apr. 28, 2014, pp. 1-2.
Krishnan et al.: “Mechanisms for Optimizing LAG/ECMP Component Link Utilization in Networks”, Oct. 7, 2014, 27 pages, https://tools.ietf.org/html/drafl-ietf-opsawg-large-flow-load-balancing-15.
U.S. Appl. No. 12/272,618, NonFinal Office Action mailed on Jan. 12, 2015, 5 pages.
U.S. Appl. No. 12/272,618, Notice of Allowance mailed on Aug. 26, 2015, 11 pages.
U.S. Appl. No. 12/272,618, Final Office Action mailed on Feb. 28, 2012, 12 pages.
U.S. Appl. No. 13/925,670, Non Final Office Action mailed on Nov. 16, 2015, 48 pages.
U.S. Appl. No. 14/230,590, Notice of Allowance mailed on Sep. 23, 2015, 8 pages.
U.S. Appl. No. 15/043,421, Notice of Allowance mailed on Jun. 27, 2016, 21 pages.
U.S. Appl. No. 14/603,304, Non Final Office Action mailed on Aug. 1, 2016, 86 pages.
U.S. Appl. No. 14/320,138, Notice of Allowance mailed on Sep. 23, 2016, 17 pages.
U.S. Appl. No. 14/603,304, Notice of Allowance mailed on Jan. 11, 2017, 13 pages.
U.S. Appl. No. 14/848,677, Non Final Office Action mailed on Feb. 10, 2017, 83 pages.
IBM User Guide, Version 2.1AIX, Solaris and Windows NT, Third Edition (Mar. 1999) 102 pages.
White Paper, Foundry Networks, “Server Load Balancing in Today's Web-Enabled Enterprises” Apr. 2002 10 pages.
Intemational Search Report & Written Opinion for PCT Application PCT/US2015/012915 mailed Apr. 10, 2015, 15 pages.
Gigamon: Vistapointe Technology Solution Brief; Visualize-Optimize-Monetize-3100-02; Feb. 2014; 2 pages.
Gigamon: Netflow Generation Feature Brief; 3099-04; Oct. 2014; 2 pages.
Gigamon: Unified Visibility Fabric Solution Brief; 3018-03; Jan. 2015; 4 pages.
Gigamon: Active Visibility for Multi-Tiered Security Solutions Overview; 3127-02; Oct. 2014; 5 pages.
Gigamon: Enabling Network Monitoring at 40Gbps and 100Gbps with Flow Mapping Technology White Paper; 2012; 4 pages.
Gigamon: Enterprise System Reference Architecture for the Visibility Fabric White Paper; 5005-03; Oct. 2014; 13 pages.
Gigamon: Gigamon Intelligent Flow Mapping White Paper; 3039-02; Aug. 2013; 7 pages.
Gigamon: Maintaining 3G and 4G LTE Quality of Service White Paper; 2012; 4 pages.
Gigamon: Monitoring, Managing, and Securing SDN Deployments White Paper; 3106-01; May 2014; 7 pages.
Gigamon: Service Provider System Reference Architecture for the Visibility Fabric White Paper; 5004-01; Mar. 2014; 11 pages.
Gigamon: Unified Visibility Fabric—A New Approach to Visibility White Paper; 3072-04; Jan. 2015; 6 pages.
Gigamon: The Visibility Fabric Architecture—A New Approach to Traffic Visibility White Paper; 2012-2013; 8 pages.
Ixia: Creating a Visibility Architecture—a New Perspective on Network Visibilty White Paper; 915-6581-01 Rev. A, Feb. 2014; 14 pages.
Gigamon: Unified Visibility Fabric; https:/lwww.gigamon.com/unfied-visibility-fabric; Apr. 7, 2015; 5 pages.
Gigamon: Application Note Stateful GTP Correlation; 4025-02; Dec. 2013; 9 pages.
Brocade and IBM Real-Time Network Analysis Solution; 2011 Brocade Communications Systems, Inc.; 2 pages.
Ixia Anue GTP Session Controller; Solution Brief; 915-6606-01 Rev. A, Sep. 2013; 2 pages.
Netscout; Comprehensive Core-to-Access IP Session Analysis for GPRS and UMTS Networks; Technical Brief; Jul. 16, 2010; 6 pages.
Netscout: nGenius Subscriber Intelligence; Data Sheet; SPDS_001-12; 2012; 6 pages.
Gigamon: Visibility Fabric Architecture Solution Brief; 2012-2013; 2 pages.
Gigamon: Visibility Fabric; More than Tap and Aggregation.bmp; 2014; 1 page.
Ntop: Monitoring Mobile Networks (2G, 3G and L TE) using nProbe; http://www.ntop.org/nprobe/monitoring-mobile-networks-2g-3g-and-lte-using-nprobe; Apr. 2, 2015; 4 pages.
Gigamon: GigaVUE-HBI Data Sheet; 4011-07; Oct. 2014; 4 pages.
Brocade IP Network Leadership Technology; Enabling Non-Stop Networking for Stackable Switches with Hitless Failover; 2010; 3 pages.
Gigamon: Adaptive Packet Filtering; Feature Brief; 3098-03 Apr. 2015; 3 pages.
Delgadillo, “Cisco Distributed Director”, White Paper, 1999, at URL:http://www-europe.cisco.warp/public/751/distdir/dd_wp.htm, (19 pages) with Table of Contents for TeleCon (16 pages).
Cisco LocalDirectorVersion 1.6.3 Release Notes, Oct. 1997, Cisco Systems, Inc. Doc No. 78-3880-05.
“Foundry Networks Announces Application Aware Layer 7 Switching on Serverlron Platform,” (Mar. 1999).
Foundry Serverlron Installation and Configuration Guide (May 2000), Table of Contents—Chapter 1-5, http://web.archive.org/web/20000815085849/http://www.foundrynetworks.com/techdocs/SI/index.html.
Foundry Serverlron Installation and Configuration Guide (May 2000), Chapter 6-10, http://web.archive.org/web/20000815085849/http://www.foundrynetworks.com/techdocs/SI/index.html.
Foundry Serverlron Installation and Configuration Guide (May 2000), Chapter 11—Appendix C, http://web.archive.org/web/20000815085849/http://www.foundrynetworks.com/techdocs/SI/index.html.
NGenius Subscriber Intelligence, http://www.netscout.com/uploads/2015/03NetScout_DS Subscriber_Intelligence_SP.pdf, downloaded circa Mar. 23, 2015, pp. 1-6.
Xu et al.: Cellular Data Network Infrastructure Characterization and Implication on Mobile Content Placement, Sigmetrics '11 Proceedings of the ACM Sigmetrics joint international conference on Measurement and modeling of computer systems, date Jun. 7-11, 2011, pp. 1-12, ISBN: 978-1-4503-0814-4 ACM New York, NY, USA copyright 2011.
E.H.T.B. Brands, Flow-Based Monitoring of GTP Trac in Cellular Networks, Date: Jul. 20, 2012, pp. 1-64, University of Twente, Enschede, The Netherlands.
Qosmos DeepFlow: Subscriber Analytics Use Case, http://www.qosmos.com/wp-content/uploads/2014/01/Qosmos-DeepFlow-Analytics-use-case-datasheet-Jan-2014.pdf, date Jan. 2014, pp. 1-2.
Configuring GTM to determine packet gateway health and availability, https://support.f5.com/kb/en-us/products/big-ip_gtm/manuals/product/gtm-implementations-11-6-0/9.html, downloaded circa Mar. 23, 2015, pp. 1-5.
ExtraHop-Arista Persistent Monitoring Architecture for SDN, downloaded circa Apr. 12, 2015, pp. 1-5.
U.S. Appl. No. 61/919,244, filed Dec. 20, 2013 by Chen et al.
U.S. Appl. No. 61/932,650, filed Jan. 28, 2014 by Munshi et al.
U.S. Appl. No. 61/994,693, filed May 16, 2014 by Munshi et al.
U.S. Appl. No. 62/088,434, filed Dec. 5, 2014 by Hsu et al.
U.S. Appl. No. 62/137,073, filed Mar. 23, 2015 by Chen et al.
U.S. Appl. No. 62/137,084, filed Mar. 23, 2015 by Chen et al.
U.S. Appl. No. 62/137,096, filed Mar. 23, 2015 by Laxman et al.
U.S. Appl. No. 62/137,106, filed Mar. 23, 2015 by Laxman et al.
U.S. Appl. No. 60/998,410, filed Oct. 9, 2007 by Wang et al.
U.S. Appl. No. 60/169,502, filed Dec. 7, 2009 by Yeejang James Lin.
U.S. Appl. No. 60/182,812, filed Feb. 16, 2000 by Skene et al.
PCT Patent Application No. PCT/US2015/012915 filed on Jan. 26, 2015 by Hsu et al.
U.S. Appl. No. 14/320,138, filed Jun. 30, 2014 by Chen et al.
U.S. Appl. No. 14/603,304, filed Jan. 22, 2015 by Hsu et al.
U.S. Appl. No. 14/848,586, filed Sep. 9, 2015 by Chen et al.
U.S. Appl. No. 14/848,645, filed Sep. 9, 2015 by Chen et al.
U.S. Appl. No. 14/848,677, filed Sep. 9, 2015 by Laxman et al.
U.S. Appl. No. 09/459,815, filed Dec. 13, 1999 by Skene et al.
U.S. Appl. No. 14/927,478, filed Oct. 30, 2015 by Vedam et al.
U.S. Appl. No. 14/927,479, filed Oct. 30, 2015 by Sharma et al.
U.S. Appl. No. 14/927,482, filed Oct. 30, 2015 by Sharma et al.
U.S. Appl. No. 14/927,484, filed Oct. 30, 2015 by Sharma et al.
U.S. Appl. No. 15/205,889, filed Jul. 8, 2016 by Hegde et al.
U.S. Appl. No. 15/206,008, filed Jul. 8, 2016 by Hegde et al.
Non-Final Office Action for U.S. Appl. No. 11/827,524 mailed on Dec. 10, 2009, 15 pages.
Non-Final Office Action for U.S. Appl. No. 11/827,524 mailed on Jun. 2, 2010, 14 pages.
Non-Final Office Action for U.S. Appl. No. 11/827,524 mailed on Nov. 26, 2010, 16 pages.
Final Office Action for U.S. Appl. No. 11/827,524 mailed on May 6, 2011, 19 pages.
Advisory Action for U.S. Appl. No. 11/827,524 mailed on Jul. 14, 2011, 5 pages.
Non-Final Office Action for U.S. Appl. No. 11/827,524 mailed on Oct. 18, 2012, 24 pages.
Notice of Allowance for U.S. Appl. No. 11/827,524 mailed Jun. 25, 2013, 11 pages.
Non-Final Office Action for U.S. Appl. No. 14/030,782 mailed on Oct. 6, 2014, 14 pages.
Non-Final Office Action for U.S. Appl. No. 13/584,534 mailed on Oct. 24, 2014, 24 pages.
Restriction Requirement for U.S. Appl. No. 13/584,534 mailed on Jul. 21, 2014, 5 pages.
Non-Final Office Action for U.S. Appl. No. 11/937,285 mailed on Jul. 6, 2009, 28 pages.
Final Office Action for U.S. Appl. No. 11/937,285 mailed on Mar. 3, 2010, 28 pages.
Non-Final Office Action for U.S. Appl. No. 11/937,285 mailed on Aug. 17, 2010, 28 pages.
Final Office Action for U.S. Appl. No. 11/937,285 mailed on Jan. 20, 2011, 41 pages.
Final Office Action for U.S. Appl. No. 11/937,285 mailed on May 20, 2011, 37 pages.
Non-Final Office Action for U.S. Appl. No. 11/937,285 mailed on Nov. 28, 2011, 40 pages.
Notice of Allowance for U.S. Appl. No. 11/937,285 mailed on Jun. 5, 2012, 10 pages.
Final Office Action for U.S. Appl. No. 14/030,782 mailed on Jul. 29, 2015, 14 pages.
Final Office Action for U.S. Appl. No. 13/584,534 mailed on Jun. 25, 2015, 21 pages.
Notice of Allowance for U.S. Appl. No. 14/030,782 mailed on Nov. 16, 2015, 20 pages.
Notice of Allowance for U.S. Appl. No. 13/584,534 mailed on Dec. 16, 2015, 7 pages.
Notice of Allowance for U.S. Appl. No. 13/584,534 mailed on Jan. 6, 2016, 4 pages.
Non-Final Office Action for U.S. Appl. No. 14/320,138 mailed on Feb. 2, 2016, 30 pages.
Non-Final Office Action for U.S. Appl. No. 15/043,421 mailed on Apr. 13, 2016, 18 pages.
U.S. Appl. No. 12/272,618, Final Office Action mailed on May 5, 2014, 13 pages.
U.S. Appl. No. 12/272,618, Non-Final Office Action mailed on Jul. 29, 2013, 13 pages.
U.S. Appl. No. 15/466,732, filed Mar. 22, 2017 by Hegde et al.
U.S. Appl. No. 15/467,766, filed Mar. 23, 2017 by Nagaraj et al.
U.S. Appl. No. 15/425,777, filed Feb. 6, 2017, by Chen et al.
Joshi et al.: A Review of Network Traffic Analysis and Prediction Techniques; arxiv.org; 2015; 22 pages.
Anjali et al.: MABE: A New Method for Available Bandwidth Estimation in an MPLS Network; submitted to World Scientific on Jun. 5, 2002; 12 pages.
Cisco Nexus Data Broker: Scalable and Cost-Effective Solution for Network Traffic Visibility; Cisco 2015; 10 pages.
VB220-240G Modular 10G/1G Network Packet Broker; VSS Monitoring; 2016, 3 pages.
Big Tap Monitoring Fabric 4.5; Big Switch Networks; Apr. 2015; 8 pages.
Gigamon Intelligent Flow Mapping—Whitepaper; 3039-04; Apr. 2015; 5 pages.
Ixia White Paper; The Real Secret to Securing Your Network; Oct. 2014; 16 pages.
Accedian—Solution Brief; FlowBroker; Feb. 2016; 9 pages.
Network Time Machine for Service Providers; Netscout; http://enterprise.netscout.com/telecom-tools/lte-solutions/network-time-machine-service-providers; Apr. 18, 2017; 8 pages.
Arista EOS Central—Introduction to TAP aggregation; https://eos.arista.com/introduction-to-tap-aggregation/; Apr. 18, 2017; 6 pages.
Brocade Session Director—Data Sheet; 2016; https://www.brocade.com/content/dam/common/documents/content-types/datasheet/brocade-session-director-ds.pdf; 5 pages.
Ixia—Evaluating Inline Security Fabric: Key Considerations; White Paper; https://www.ixiacom.com/sites/default/files/2016-08/915-8079-01-S-WP-Evaluating%20Inline%20Security%20Fabric_v5.pdf; 10 pages.
Next-Generation Monitoring Fabrics for Mobile Networks; Big Switch Networks—White Paper; 2014; 9 pages.
Gigamon Adaptive Packet Filtering; Jan. 25, 2017; 3 pages.
VB220 Modular 10G/1G Network Packet Broker Datasheet; VSS Monitoring; 2016; 8 pages.
FlexaWare; FlexaMiner Packet Filter FM800PF; Jan. 27, 2017; 5 pages.
GL Communications Inc.; PacketBroker—Passive Ethernet Tap; Jan. 27, 2017; 2 pages.
International Search Report & Written Opinion for PCT Application PCT/US2017/025998 mailed Jul. 20, 2017, 8 pages.
Ixia & Vectra, Complete Visibility for a Stronger Advanced Persistent Threat (APT) Defense, pp. 1-2, May 30, 2016.
Extended European Search Report & Opinion for EP Application 17000212.5 dated Aug. 1, 2017, 9 pages.
Extended European Search Report & Opinion for EP Application 17000213.3 dated Aug. 1, 2017, 7 pages.
U.S. Appl. No. 14/927,484, Non-Final Office Action mailed on Aug. 9, 2017, 77 pages.
U.S. Appl. No. 14/848,677, Notice of Allowance mailed on Aug. 28, 2017, 31 pages.
Caro et al., "SCTP: A Proposed Standard for Robust Internet Transport"; IEEE Computer (vol. 36), Nov. 2003, 8 pages.
Related Publications (1)
  Publication Number: US 20210328928 A1; Date: Oct. 2021; Country: US
Continuations (1)
  Parent: U.S. Appl. No. 15/336,333, filed Oct. 2016 (US)
  Child: U.S. Appl. No. 17/307,365 (US)