Rule-based network traffic interception and distribution scheme

Information

  • Patent Grant
  • Patent Number
    9,565,138
  • Date Filed
    Monday, June 30, 2014
  • Date Issued
    Tuesday, February 7, 2017
Abstract
Using a hash function, an L2/L3 switch can produce an FID (forward identifier) for a data packet. The L2/L3 switch can select, from among potentially several stored VLAN flooding tables, a particular VLAN flooding table that is associated with a particular VLAN on which the data packet is to be carried. The rows of the particular VLAN flooding table can specify different combinations of the particular VLAN's egress ports. The L2/L3 switch can locate, in the particular VLAN flooding table, a particular row that specifies the FID. The L2/L3 switch can read, from the particular row, a specified subset of the egress ports that are associated with the particular VLAN. The L2/L3 switch can transmit copies of the data packet out each of the egress ports specified in the subset, toward analytic servers connected to those egress ports.
Description
BACKGROUND

The disclosure herein pertains generally to the field of computer networks. An operator of a telecommunication network can find it beneficial to analyze the traffic that flows through that network. Such analysis might be performed for a variety of different reasons. For example, the operator might want to obtain information that could be used as business intelligence. For another example, the operator might want to detect and pre-empt attacks being made through the network. In order to help prevent such attacks, the operator might want to analyze traffic to determine the sources from which different types of traffic originate.


Such traffic analysis can be performed at an analytic server that the operator maintains. Data packets flowing through the network can be intercepted at network elements situated within the network between the data sources and the data destinations. These network elements can duplicate the data packets prior to forwarding those data packets on toward their ultimate destinations. The network elements can divert the duplicate packets to an analytic server. Due to the vast amount of traffic that flows through the network, the operator might maintain numerous separate analytic servers that are capable of analyzing different portions of the total traffic concurrently. The process of intercepting data packets, duplicating them, and forwarding the duplicates to analytic servers is called “telemetry.”



FIG. 1 is a block diagram that illustrates an example 100 of an L2/L3 switch that can receive data packets from various sources, duplicate those data packets, and forward the duplicates to various separate analytic servers. Data sources 102A-N can be communicatively coupled to Internet 104. Data sources 102A-N can address data packets to various specified destinations, typically identified by destination Internet Protocol (IP) addresses. Data sources 102A-N can then send these data packets through Internet 104. Network elements within Internet 104 can forward the data packets hop by hop toward their ultimate destinations.


An L2/L3 switch 108 can be communicatively coupled to (and potentially within) Internet 104. L2/L3 switch 108 can expose a set of ingress ports 106A-N and a set of egress ports 110A-N. Ingress ports 106A-N can communicatively couple L2/L3 switch 108 to various separate network elements within Internet 104. Ingress ports 106A-N can receive data packets that are travelling through Internet 104 on their way to their specified destinations. L2/L3 switch 108 can create duplicates of these arriving data packets.


For each original arriving data packet, L2/L3 switch 108 can look up a next hop for that data packet based on its specified destination. L2/L3 switch 108 can forward each original data packet on toward its next hop through one of the switch's egress ports (not necessarily any of egress ports 110A-N) that is connected to that next hop. In this manner, the original data packets eventually reach their specified destinations.


At least some of egress ports 110A-N can be communicatively coupled to separate analytic servers 112A-N. For each duplicate data packet, L2/L3 switch 108 can select one or more of analytic servers 112A-N to be responsible for analyzing the class of network traffic to which that duplicate belongs. L2/L3 switch 108 can then forward the duplicate data packet out of the one of egress ports 110A-N that is communicatively coupled to the one of analytic servers 112A-N that is responsible for analyzing that class of traffic.


Analytic servers 112A-N can receive duplicate data packets from L2/L3 switch 108. Analytic servers 112A-N can perform analysis relative to those packets. Analytic servers 112A-N can generate statistics and reports based on the analysis that they perform.


SUMMARY

An L2/L3 switch can carry outgoing traffic on multiple separate virtual local area networks (VLANs). Each such VLAN can be associated with a subset of the switch's ports. Some of these ports can be grouped together into trunks. Under certain scenarios, it is desirable to duplicate data packets received at the switch and to send those duplicates through each of a VLAN's trunks and untrunked ports. Techniques described herein enable data traffic to be load-balanced among the ports of each of the trunks in a VLAN so that those ports are less likely to become overloaded.


According to an implementation, an L2/L3 switch can duplicate an incoming data packet before forwarding the original packet on to its specified destination. The L2/L3 switch can classify the duplicate data packet using rules that are associated with an ingress trunk that includes the ingress port on which the original data packet was received. Based on this class, the L2/L3 switch can select, from among potentially several VLANs, a particular VLAN over which the duplicate data packet is to be carried.


The L2/L3 switch can input certain of the duplicate data packet's attributes into a hash function to produce a “forward identifier” or FID. The L2/L3 switch can select, from among potentially several stored VLAN flooding tables, a particular VLAN flooding table that is associated with the particular VLAN. The rows of the particular VLAN flooding table can specify different combinations of the particular VLAN's egress ports.


The L2/L3 switch can locate, in the particular VLAN flooding table, a particular row that specifies the FID. The L2/L3 switch can read, from the particular row, a specified subset of the egress ports that are associated with the particular VLAN. The subset can specify each of the particular VLAN's untrunked egress ports and also one egress port per trunk that is contained in the particular VLAN. The L2/L3 switch can transmit the duplicate data packet out each of the egress ports specified in the subset, toward analytic servers connected to those egress ports. The L2/L3 switch can optionally employ a VLAN flooding technique in order to send the duplicate data packet out through multiple egress ports.





BRIEF DESCRIPTION OF THE DRAWINGS


FIG. 1 is a block diagram that illustrates an example of an L2/L3 switch that can receive data packets from various sources, duplicate those data packets, classify those duplicates, and forward the duplicates to various separate analytic servers.



FIG. 2 is a block diagram that illustrates an example of an L2/L3 switch in which certain ports can be grouped together into trunks and in which certain ports can be associated with virtual local area networks (VLANs), according to an embodiment of the invention.



FIG. 3 is a block diagram that illustrates an example of an L2/L3 switch that includes line cards that are interconnected via switching fabric, according to an embodiment of the invention.



FIG. 4 is a block diagram that illustrates an example of an L2/L3 switch that stores VLAN flooding tables that indicate, for various different hash values, a set of egress ports through which duplicate data packets are to be sent, according to an embodiment of the invention.



FIG. 5 is a flow diagram that illustrates an example of a technique for load-balancing the transmission of duplicate data packets within each of a selected VLAN's trunks and untrunked ports, according to an embodiment of the invention.



FIG. 6 depicts a simplified block diagram of a network device that may incorporate an embodiment of the present invention.





DETAILED DESCRIPTION

Using a hash function, an L2/L3 switch can produce an FID (forward identifier) for a data packet. The L2/L3 switch can select, from among potentially several stored VLAN flooding tables, a particular VLAN flooding table that is associated with a particular VLAN on which the data packet is to be carried. The rows of the particular VLAN flooding table can specify different combinations of the particular VLAN's egress ports.


The L2/L3 switch can locate, in the particular VLAN flooding table, a particular row that specifies the FID. The L2/L3 switch can read, from the particular row, a specified subset of the egress ports that are associated with the particular VLAN. The L2/L3 switch can transmit copies of the data packet out each of the egress ports specified in the subset, toward analytic servers connected to those egress ports.


Trunked Ports



FIG. 2 is a block diagram that illustrates an example 200 of an L2/L3 switch in which certain ports can be grouped together into trunks and in which certain ports can be associated with virtual local area networks (VLANs), according to an embodiment of the invention. Data sources 202A-N can be communicatively coupled to Internet 204. Ingress ports 206A-N of L2/L3 switch 208 can receive data packets that travel from data sources 202A-N through Internet 204. Certain ones of ingress ports 206A-N can be grouped together into trunks. For example, ingress ports 206A-C can be grouped together into trunk 214A, while ingress ports 206D-F can be grouped together into trunk 214B.


Data packets incoming to L2/L3 switch 208 can be classified into various classes or categories based on attributes that those data packets possess. Such classification can be performed based on classification rules. According to an implementation, a separate, potentially different set of classification rules can be associated with each separate ingress port of L2/L3 switch 208. In such an implementation, the classification rules that are associated with the ingress port on which a data packet arrives can be applied to that data packet.


According to an implementation, a separate, potentially different set of classification rules also can be associated with each separate trunk (e.g., trunks 214A and 214B) of L2/L3 switch 208. In such an implementation, the classification rules that are associated with the trunk that includes the ingress port on which a data packet arrives can be applied to that data packet. Customized classification rules can be programmed into L2/L3 switch 208 by its owner or operator. For example, a classification rule can map, to a particular class or category, data packets having a header-specified source IP address, destination IP address, source port, destination port, and/or transport layer protocol type (e.g., Transmission Control Protocol (TCP) or User Datagram Protocol (UDP)). Ingress ports 206A-N can be grouped into trunks as desired by the owner or operator of L2/L3 switch 208.
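
To make the rule mechanism concrete, here is a minimal sketch of 5-tuple rule matching in Python. It is illustrative only: the Rule fields, the first-match policy, and the dictionary packet representation are assumptions, not details from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    traffic_class: str
    src_ip: Optional[str] = None    # None acts as a wildcard
    dst_ip: Optional[str] = None
    src_port: Optional[int] = None
    dst_port: Optional[int] = None
    protocol: Optional[str] = None  # e.g., "TCP" or "UDP"

    def matches(self, packet: dict) -> bool:
        # A packet matches when every non-wildcard field agrees.
        wanted = {
            "src_ip": self.src_ip, "dst_ip": self.dst_ip,
            "src_port": self.src_port, "dst_port": self.dst_port,
            "protocol": self.protocol,
        }
        return all(
            want is None or packet.get(field) == want
            for field, want in wanted.items()
        )

def classify(packet: dict, rules: list) -> Optional[str]:
    # First matching rule wins; unmatched packets get no class.
    for rule in rules:
        if rule.matches(packet):
            return rule.traffic_class
    return None

# Example: a rule mapping UDP traffic to port 53 to a "dns" class.
rules = [Rule(traffic_class="dns", dst_port=53, protocol="UDP")]
assert classify({"dst_port": 53, "protocol": "UDP"}, rules) == "dns"
```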


In an embodiment, after a data packet has been classified based on the rules that apply to that data packet, L2/L3 switch 208 can determine, based on the data packet's class or category, through which one of egress ports 210A-N the packet is to be forwarded. This determination may also be based on a VLAN to which that class or category is mapped, as is discussed in greater detail below.


In an implementation, egress ports 210A-N also can be grouped into trunks. As shown in FIG. 2, egress ports 210B, 210C, and 210D are grouped together into trunk 216A. Egress ports 210E and 210F are grouped together into trunk 216B. Egress ports 210I, 210J, and 210K are grouped together into trunk 216C. Egress ports 210L and 210N are grouped together into trunk 216D. Egress ports 210A and 210H are not a part of any trunk, but are “untrunked” ports.


Each of trunks 216A-D and untrunked ports 210A and 210H can be associated with a separate one of analytic servers 212A-N. For example, egress port 210A can forward duplicate data packets to analytic server 212A. Egress ports 210B, 210C, and 210D can, as members of trunk 216A, forward duplicate data packets to analytic server 212B. Egress ports 210E and 210F can, as members of trunk 216B, forward duplicate data packets to analytic server 212C. Egress port 210H can forward duplicate data packets to analytic server 212D. Egress ports 210I, 210J, and 210K can, as members of trunk 216C, forward duplicate data packets to analytic server 212E. Egress ports 210L and 210N can, as members of trunk 216D, forward duplicate data packets to analytic server 212N.


Class-To-VLAN Map


Egress ports that are grouped into trunks, as well as untrunked egress ports, can be associated with various virtual local area networks (VLANs). As shown in FIG. 2, egress ports 210A-F are associated with VLAN 218A, while egress ports 210H-N are associated with VLAN 218B. Although FIG. 2 shows egress ports 210A-N being associated with just one VLAN each, in various embodiments, one or more of egress ports 210A-N could be concurrently associated with multiple separate VLANs.


In an implementation, L2/L3 switch 208 can store a class-to-VLAN map 250 that indicates, for each class or category of data packets, which of the several VLANs should carry the duplicates of data packets belonging to that class or category out of L2/L3 switch 208. Thus, L2/L3 switch 208 can forward duplicate data packets belonging to a class associated with VLAN 218A out of one of the egress ports associated with VLAN 218A (e.g., egress ports 210A-F), while L2/L3 switch 208 can forward duplicate data packets belonging to a class associated with VLAN 218B out of one of the egress ports associated with VLAN 218B (e.g., egress ports 210H-N).
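
A class-to-VLAN map such as map 250 can be modeled as a simple lookup. The sketch below is hypothetical; the class names and VLAN labels are invented for illustration, not taken from the patent.

```python
# Hypothetical class-to-VLAN map mirroring the role of map 250.
CLASS_TO_VLAN = {
    "dns": "VLAN_218A",
    "video": "VLAN_218B",
}

def select_vlan(traffic_class: str) -> str:
    # The duplicate data packet is carried out of the switch on this VLAN.
    return CLASS_TO_VLAN[traffic_class]
```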


Thus, according to an embodiment, for each data packet arriving on any of ingress ports 206A-N, L2/L3 switch 208 can follow the rules associated with those ingress ports (and/or the trunks to which those ingress ports belong) to determine which of the VLANs will be carrying a duplicate of that data packet out of L2/L3 switch 208 to various ones of analytic servers 212A-N. In an alternative embodiment, a duplicate data packet can be carried out of L2/L3 switch 208 on multiple separate VLANs, each of which can be associated with the duplicate data packet's attributes.


Line Cards



FIG. 3 is a block diagram that illustrates an example 300 of an L2/L3 switch that includes line cards that are interconnected via switching fabric, according to an embodiment of the invention. L2/L3 switch 308 can be the same as L2/L3 switch 208 of FIG. 2, viewed at a different level of abstraction.


L2/L3 switch 308 can include line cards 320A-N. Each of line cards 320A-N can include a set of ports, a packet processor, and a content addressable memory (CAM). For example, line card 320A is shown having ports 326A. Line card 320B is shown having ports 326B. Line card 320C is shown having ports 326C. Line card 320N is shown having ports 326N. Ports 326A-N can behave as ingress ports or egress ports. Ports 326A-N can correspond to ingress ports 206A-N and egress ports 210A-N of FIG. 2, for example.


Each of line cards 320A-N can be connected to switching fabric 322. A management card 324 also can be connected to switching fabric 322. Management card 324 can program line cards 320A-N with instructions that govern the behavior of line cards 320A-N. Such instructions can specify the internal addressing behavior that line cards 320A-N are to follow when internally forwarding received data packets to others of line cards 320A-N.


In an embodiment, a data packet can be received at any port within ports 326A-N. The packet processor of the one of line cards 320A-N to which that port belongs can perform rule-based classification relative to the data packet based on the data packet's attributes. That packet processor also can create a duplicate of that data packet. Based on the original data packet's specified destination, the packet processor can perform a lookup in the receiving line card's CAM in order to determine a next hop for the original data packet.


Based on the duplicate data packet's class or category, the packet processor can perform a lookup in the CAM in order to determine a next hop for the duplicate data packet. In the case of a duplicate data packet, the next hop can be based on the VLAN that is associated with the data packet's class or category. The packet processor can internally address both the original data packet and its duplicate to others of line cards 320A-N that possess the ports that are associated with the next hops for those data packets.


The receiving line card can send the original data packet and its duplicate through switching fabric 322, which can use the internal addressing in order to route the data packets within L2/L3 switch 308 to the appropriate sending line cards within line cards 320A-N. These sending line cards can then forward the data packets through the appropriate ones of ports 326A-N toward their ultimate destinations. In the case of an original data packet, the ultimate destination may be a device possessing an Internet Protocol (IP) address matching the destination IP address specified in the original data packet's header. In the case of a duplicate data packet, the ultimate destination may be an analytic server that is connected to the port out of which the sending line card transmits the duplicate data packet.
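
The following sketch illustrates, under stated assumptions, the dual lookup a line card's packet processor might perform: plain dictionaries stand in for the CAM, and process_ingress, dest_cam, and vlan_cam are invented names. It reuses the classify helper sketched earlier.

```python
def process_ingress(packet, dest_cam, vlan_cam, rules, class_to_vlan):
    duplicate = dict(packet)  # the packet processor duplicates the packet

    # Original packet: next hop is looked up by destination IP address.
    next_hop_original = dest_cam[packet["dst_ip"]]

    # Duplicate packet: next hop is based on the VLAN mapped to its class.
    vlan = class_to_vlan[classify(duplicate, rules)]
    next_hop_duplicate = vlan_cam[vlan]

    # Both packets would then be internally addressed to the line cards
    # owning the ports for these next hops and sent through the fabric.
    return (packet, next_hop_original), (duplicate, next_hop_duplicate)
```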


VLAN Flooding Tables



FIG. 4 is a block diagram that illustrates an example 400 of an L2/L3 switch that stores VLAN flooding tables that indicate, for various different hash values, a set of egress ports through which duplicate data packets are to be sent, according to an embodiment of the invention. L2/L3 switch 408 can be the same as L2/L3 switch 208 of FIG. 2, viewed at a different level of abstraction.


As shown in FIG. 4, L2/L3 switch 408 can include ingress ports 406A-N, some of which can be grouped into trunks 414A and 414B. L2/L3 switch 408 can further include egress ports 410A-N, some of which can be grouped into trunks 416A-D. Egress ports 410A-F can be associated with VLAN 418A, while egress ports 410H-N can be associated with VLAN 418B. Various ones of egress ports 410A-N can be connected to various ones of analytic servers 412A-N.


In an embodiment, L2/L3 switch 408 can store multiple VLAN flooding tables 430A and 430B. L2/L3 switch 408 can store a separate VLAN flooding table for each VLAN over which L2/L3 switch 408 can carry duplicate data packets. In the illustrated example, L2/L3 switch 408 stores VLAN flooding table 430A for VLAN 418A. L2/L3 switch 408 additionally stores VLAN flooding table 430B for VLAN 418B. Each of line cards 320A-N of FIG. 3 can store a separate copy of each of VLAN flooding tables 430A and 430B. For example, each of line cards 320A-N can store a copy of these VLAN flooding tables in a CAM or other memory.


Referring again to FIG. 4, upon receiving a data packet on one of ingress ports 406A-N, L2/L3 switch 408 (and, more specifically, the packet processor of the line card that includes the receiving ingress port) can classify that data packet using rule-based classification. L2/L3 switch 408 can use the data packet's class or category to select a VLAN on which a duplicate of the data packet will be carried. For example, based on the data packet's attributes, L2/L3 switch 408 might determine that the data packet's duplicate is to be carried on VLAN 418A. L2/L3 switch 408 can select, from its VLAN flooding tables (e.g., VLAN flooding tables 430A and 430B), the particular VLAN flooding table that is associated with the selected VLAN. Continuing the previous example, L2/L3 switch 408 can determine that VLAN 418A is associated with VLAN flooding table 430A. Consequently, using entries within the selected VLAN flooding table (in this example, VLAN flooding table 430A), L2/L3 switch 408 can determine through which of (potentially several of) egress ports 410A-N the duplicate data packet is to be sent.


Each of VLAN flooding tables 430A and 430B can contain a set of rows. As illustrated, VLAN flooding table 430A contains rows 432A-F, while VLAN flooding table 430B contains rows 434A-F. In an implementation, the quantity of rows in a particular VLAN table can be based on the quantities of egress ports in the various trunks included within the corresponding VLAN. More specifically, the quantity of rows can be equal to the least common multiple of the numbers of ports in each of the corresponding VLAN's trunks.


Thus, for example, in VLAN 418A, the number of ports in trunk 416A is 3, and the number of ports in trunk 416B is 2. The least common multiple of 2 and 3 is 6, so VLAN flooding table 430A contains 6 rows. Similarly, in VLAN 418B, the number of ports in trunk 416C is 3, and the number of ports in trunk 416D is 2. Again, the least common multiple of 2 and 3 is 6, so VLAN flooding table 430B also contains 6 rows.


In an implementation, each row of VLAN flooding tables 430A and 430B can include a quantity of columns that is equal to the sum of the number of trunks in the corresponding VLAN plus the number of untrunked ports in the corresponding VLAN plus one. For example, in VLAN 418A, there are 2 trunks (i.e., 416A and 416B) and 1 untrunked port (i.e., 410A), so the quantity of columns in each of rows 432A-F is 2+1+1, or 4 columns. Similarly, in VLAN 418B, there are also 2 trunks (i.e., 416C and 416D) and 1 untrunked port (i.e., 410H), so the quantity of columns in each of rows 434A-F is 2+1+1, or 4 columns.
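
These dimension rules can be checked with a short computation. This sketch assumes Python 3.9+ for math.lcm and takes the trunk sizes from FIG. 4.

```python
import math

trunk_sizes = [3, 2]   # trunk 416A has 3 egress ports, trunk 416B has 2
untrunked_count = 1    # egress port 410A

rows = math.lcm(*trunk_sizes)                    # lcm(3, 2) == 6
cols = len(trunk_sizes) + untrunked_count + 1    # 2 + 1 + 1 == 4 (FID column)
```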


In each VLAN flooding table row, the first column can contain an identifier called an FID (standing for “forward identifier”). This identifier can be produced by a hash function 436. In an implementation, when a data packet is received on an ingress port of a particular line card, the packet processor of that line card can invoke hash function 436 relative to a specified set of the data packet's attributes. Hash function 436 thereby produces a hash value, which will be found in the first column of one of the rows of the selected VLAN flooding table. Thus, rows 432A-F contain FIDs 1-6 in their first columns. Rows 434A-F also contain FIDs 1-6 in their first columns. The row containing the matching hash value is the row that is applicable to the incoming data packet.
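
Since the patent does not specify the hash function, the following sketch simply folds a SHA-256 digest of selected packet attributes into the range of FIDs stored in a table; the attribute set and the function name compute_fid are assumptions made for illustration.

```python
import hashlib

def compute_fid(packet: dict, num_rows: int) -> int:
    # Concatenate the selected attributes into a stable key.
    key = "|".join(
        str(packet.get(field, ""))
        for field in ("src_ip", "dst_ip", "src_port", "dst_port", "protocol")
    )
    digest = hashlib.sha256(key.encode()).digest()
    # Fold the digest into the FID range 1..num_rows.
    return int.from_bytes(digest[:4], "big") % num_rows + 1
```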


In each VLAN flooding table row, the remaining columns can specify the set of egress ports through which the duplicate data packet is to be forwarded. In an implementation, VLAN flooding tables 430A and 430B can be used to load-balance duplicate data packets among subsets of egress ports 410A-N. VLAN flooding tables 430A and 430B can be populated in a manner such that a different subset of the egress ports associated with the corresponding VLAN is specified in each row. For each of VLAN flooding tables 430A and 430B, the rows of that table can collectively contain all of the possible combinations of single-port selections from each of the corresponding VLAN's trunks and untrunked ports. Upon locating the row that has the hash value (or FID) that matches the hash value produced by inputting the duplicate data packet's attributes into hash function 436, L2/L3 switch 408 can cause the duplicate data packet to be forwarded out of each of the egress ports specified in that row.


More specifically, in one implementation, the columns following the first column in each VLAN flooding table row collectively specify one egress port per trunk or untrunked port in the corresponding VLAN. Within a column corresponding to a particular trunk, the rows of the VLAN flooding table can rotate through the egress ports associated with that trunk to achieve a balanced distribution among that trunk's egress ports.


For example, in VLAN flooding table 430A, all of rows 432A-F specify egress port 410A in the second column (since egress port 410A is untrunked). Rows 432A-F rotate through successive ones of egress ports 410B-D in the third column (since egress ports 410B-D belong to the same trunk 416A). Rows 432A-F rotate through successive ones of egress ports 410E and 410F in the fourth column (since egress ports 410E and 410F belong to the same trunk 416B).
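
The rotation scheme can be expressed as a small table builder. This is a sketch under the assumption that co-rotating each trunk's column modulo its port count realizes the distribution described above; the port-name strings and the helper name are illustrative.

```python
import math  # math.lcm requires Python 3.9+

def build_flooding_table(untrunked_ports, trunks):
    """Each row: [FID, untrunked ports..., one port per trunk]."""
    num_rows = math.lcm(*(len(trunk) for trunk in trunks))
    table = []
    for i in range(num_rows):
        row = [i + 1, *untrunked_ports]        # FID, then untrunked ports
        for trunk in trunks:
            row.append(trunk[i % len(trunk)])  # rotate within the trunk
        table.append(row)
    return table

# VLAN 418A: untrunked port 410A; trunk 416A = 410B-D; trunk 416B = 410E-F.
table_430A = build_flooding_table(
    ["410A"], [["410B", "410C", "410D"], ["410E", "410F"]]
)

# Built the same way, table 430B reproduces the operational example given
# below: the FID 3 row selects egress ports 410H, 410K, and 410L.
table_430B = build_flooding_table(
    ["410H"], [["410I", "410J", "410K"], ["410L", "410N"]]
)
assert table_430B[2] == [3, "410H", "410K", "410L"]
```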


By way of operational example, if an incoming data packet's class is associated with VLAN 418B, and if that incoming data packet's attributes hash to FID 3, then row 434C of VLAN flooding table 430B will be applicable to the incoming data packet. According to row 434C, duplicates of the incoming data packet are to be sent out through egress ports 410H, 410K, and 410L.


Load-Balancing within a VLAN's Trunks



FIG. 5 is a flow diagram that illustrates an example of a technique for load-balancing the transmission of duplicate data packets within each of a selected VLAN's trunks and untrunked ports, according to an embodiment of the invention. The technique can be performed by L2/L3 switch 408 of FIG. 4, for example.


Referring again to FIG. 5, in block 502, an L2/L3 switch receives an incoming data packet. In block 504, the L2/L3 switch creates a duplicate of the data packet. In block 506, the L2/L3 switch forwards the original data packet toward its specified destination.


In block 508, the L2/L3 switch applies rules to the duplicate data packet's attributes in order to classify the duplicate data packet. In block 510, the L2/L3 switch uses a class-to-VLAN map to determine which of several VLANs is mapped to the duplicate data packet's class. In block 512, the L2/L3 switch selects, from among several VLAN flooding tables, a particular VLAN flooding table that is associated with the VLAN determined in block 510.


In block 514, the L2/L3 switch inputs a set of the duplicate data packet's attributes into a hash function in order to produce an FID for the duplicate data packet. In block 516, the L2/L3 switch locates, in the particular VLAN flooding table selected in block 512, a particular row that specifies the FID.


In block 518, the L2/L3 switch reads, from the particular row located in block 516, a subset of the egress ports that are contained in the VLAN determined in block 510. In an embodiment, the subset includes all of the VLAN's untrunked ports (if any) as well as one egress port per trunk of the VLAN. In block 520, the L2/L3 switch causes the duplicate data packet to be transmitted out each of the egress ports in the subset read in block 518. In one implementation, the duplicate data packet can be transmitted through multiple egress ports using the mechanism of VLAN flooding, which is further described in U.S. Pat. No. 8,615,008, which is incorporated by reference herein. In one embodiment, the VLAN flooding technique involves disabling media access control (MAC) learning on one or more ports, thereby forcing the transmission of a packet through multiple ports associated with a VLAN. The duplicate data packet thus reaches each of the analytic servers that is connected to the VLAN determined in block 510.
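
The whole FIG. 5 flow can be summarized by composing the illustrative helpers sketched in earlier sections (classify, a class-to-VLAN map, build_flooding_table, compute_fid). The forward and transmit callables stand in for the switch's forwarding and egress machinery and are hypothetical; this is a sketch of the technique, not the switch's implementation.

```python
def handle_packet(packet, rules, class_to_vlan, flooding_tables,
                  forward, transmit):
    duplicate = dict(packet)                # blocks 502 and 504
    forward(packet)                         # block 506: original onward

    traffic_class = classify(duplicate, rules)    # block 508
    vlan = class_to_vlan[traffic_class]           # block 510
    table = flooding_tables[vlan]                 # block 512
    fid = compute_fid(duplicate, len(table))      # block 514
    row = next(r for r in table if r[0] == fid)   # block 516

    for egress_port in row[1:]:             # blocks 518 and 520
        transmit(duplicate, egress_port)    # e.g., via VLAN flooding
```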


Example Network Node


Various systems and devices may incorporate an embodiment of the present invention. FIG. 6 depicts a simplified block diagram of a network device 600 that may incorporate an embodiment of the present invention (e.g., network device 600 may correspond to nodes depicted in the figures above). In the embodiment depicted in FIG. 6, network device 600 comprises a plurality of ports 612 for receiving and forwarding data packets and multiple cards that are configured to perform processing to facilitate forwarding of the data packets to their intended destinations. The multiple cards may include one or more line cards 604 and a management card 602. In one embodiment, a card, sometimes also referred to as a blade or module, can be inserted into one of a plurality of slots on the chassis of network device 600. This modular design allows for flexible configurations, with different combinations of cards in the various slots of the device according to differing network topologies and switching requirements. The components of network device 600 depicted in FIG. 6 are meant for illustrative purposes only and are not intended to limit the scope of the invention in any manner. Alternative embodiments may have more or fewer components than those shown in FIG. 6.


Ports 612 represent the I/O plane for network device 600. Network device 600 is configured to receive and forward packets using ports 612. A port within ports 612 may be classified as an input port or an output port depending upon whether network device 600 receives or transmits a data packet using the port. A port over which a packet is received by network device 600 is referred to as an input or ingress port. A port used for communicating or forwarding a packet from network device 600 is referred to as an output or egress port. A particular port may function both as an input/ingress port and an output/egress port. A port may be connected by a link or interface to a neighboring network device or network. Ports 612 may be capable of receiving and/or transmitting different types of traffic at different speeds, including 1 Gigabit/sec, 10 Gigabits/sec, 100 Gigabits/sec, or even more. In some embodiments, multiple ports of network device 600 may be logically grouped into one or more trunks.


Upon receiving a data packet via an input port, network device 600 is configured to determine an output port of device 600 to be used for transmitting the data packet from network device 600 to facilitate communication of the packet to its intended destination. Within network device 600, the packet is forwarded from the input port to the determined output port and then transmitted from network device 600 using the output port. In one embodiment, forwarding of packets from an input port to an output port is performed by one or more line cards 604. Line cards 604 represent the data forwarding plane of network device 600. Each line card may comprise one or more packet processors that are programmed to perform forwarding of data packets from an input port to an output port. In one embodiment, processing performed by a line card may comprise extracting information from a received packet, performing lookups using the extracted information to determine an output port for the packet such that the packet can be forwarded to its intended destination, and forwarding the packet to the output port. The extracted information may include, for example, the header of the received packet.


Management card 602 is configured to perform management and control functions for network device 600 and represents the management plane for network device 600. In one embodiment, management card 602 is communicatively coupled to line cards 604 via switch fabric 606. Management card 602 may comprise one or more physical processors 608, one or more of which may be multicore processors. These management card processors may be general purpose multicore microprocessors, such as those provided by Intel, AMD, ARM, Freescale Semiconductor, Inc., and the like, operating under the control of software stored in associated memory 610. The processors may run one or more virtual machines (VMs). Resources allocated to these VMs may be dynamically changed. In some embodiments, multiple management cards may be provided for redundancy and to increase availability.


In some embodiments, one or more line cards 604 may each comprise one or more physical processors 614, some of which may be multicore. These processors may run one or more VMs. Resources allocated to these VMs may be dynamically changed.


The embodiment depicted in FIG. 6 depicts a chassis-based system. This however is not intended to be limiting. Certain embodiments of the present invention may also be embodied in non-chassis based network devices, which are sometimes referred to as “pizza boxes.” Such a network device may comprise a single physical multicore CPU or multiple physical multicore CPUs.


Various embodiments described above can be realized using any combination of dedicated components and/or programmable processors and/or other programmable devices. The various embodiments may be implemented only in hardware, or only in software, or using combinations thereof. For example, the software may be in the form of instructions, programs, etc. stored in a computer-readable memory and may be executed by one or more processing units, where a processing unit is a processor, a core of a processor, or a percentage of a core. In certain embodiments, the various processing described above, including the processing depicted in the flowcharts described above, can be performed in software without needing changes to existing device hardware (e.g., router hardware), thereby increasing the economic viability of the solution. Because certain inventive embodiments can be implemented entirely in software, they allow for quick rollouts or turnarounds along with less capital investment, which further increases the economic viability and attractiveness of the solution.


The various processes described herein can be implemented on the same processor or different processors in any combination, with each processor having one or more cores. Accordingly, where components or modules are described as being adapted to or configured to perform a certain operation, such configuration can be accomplished, e.g., by designing electronic circuits to perform the operation, by programming programmable electronic circuits (such as microprocessors) to perform the operation, by providing software or code instructions that are executable by the component or module (e.g., one or more processors) to perform the operation, or any combination thereof. Processes can communicate using a variety of techniques including but not limited to conventional techniques for interprocess communication, and different pairs of processes may use different techniques, or the same pair of processes may use different techniques at different times. Further, while the embodiments described above may make reference to specific hardware and software components, those skilled in the art will appreciate that different combinations of hardware and/or software components may also be used and that particular operations described as being implemented in hardware might also be implemented in software or vice versa.


The various embodiments are not restricted to operation within certain specific data processing environments, but are free to operate within a plurality of data processing environments. Additionally, although embodiments have been described using a particular series of transactions, this is not intended to be limiting.


Thus, although specific invention embodiments have been described, these are not intended to be limiting. Various modifications and equivalents are within the scope of the following claims.

Claims
  • 1. A network device comprising:
    a plurality of egress ports;
    one or more processors; and
    a memory coupled with and readable by the one or more processors, the memory including instructions that, when executed by the one or more processors, cause at least one processor from the one or more processors to perform operations including:
    generating an identifier for a first data packet using a hash function and one or more attributes of the data packet;
    determining a first class for the first data packet based on a specified first set of attributes of the first data packet;
    determining a second class for a second data packet based on a specified second set of attributes of the second data packet;
    determining a first set of ports from the plurality of egress ports, wherein the first set of ports is determined using the identifier and a first table from a plurality of tables, wherein the first table is associated with a first VLAN from a plurality of VLANs, wherein the first VLAN is associated with the first class but not the second class;
    determining a second set of ports from the plurality of egress ports, wherein the second set of ports is determined using a second table from the plurality of tables, wherein the second table is associated with a second VLAN from the plurality of VLANs, wherein the second VLAN is associated with the second class but not the first class;
    sending a copy of the first data packet through each egress port from the first set of ports; and
    sending a copy of the second data packet through each egress port from the second set of ports.
  • 2. A network device, comprising:
    a plurality of egress ports;
    one or more processors; and
    a memory coupled with and readable by the one or more processors, the memory including instructions that, when executed by the one or more processors, cause at least one processor from the one or more processors to perform operations including:
    generating a first identifier by inputting attributes of a first data packet into a hash function;
    reading a first set of egress ports from a first table row from a table, wherein the first table row is associated with the first identifier;
    sending a copy of the first data packet to each egress port from the first set of egress ports;
    generating a second identifier for a second data packet by inputting attributes of the second data packet into the hash function;
    reading a second set of egress ports from a second table row from the table, wherein the second table row is associated with the second identifier but not the first; and
    sending a copy of the second data packet through each egress port in the second set of egress ports;
    wherein the second set of egress ports differs from the first set of egress ports; and
    wherein the second set of egress ports and the first set of egress ports both belong to a same VLAN.
  • 3. A non-transitory computer-readable storage medium storing instructions which, when executed by one or more processors, cause the one or more processors to perform operations including:
    generating an identifier for a first data packet received by a network device, the network device including a plurality of egress ports, wherein the identifier is generated using a hash function and one or more attributes of the first data packet;
    determining a first class for the first data packet, wherein the first class is determined based on a specified first set of attributes of the first data packet;
    determining a second class for a second data packet received by the network device, wherein the second class is determined based on a specified second set of attributes of the second data packet;
    determining a first set of ports from the plurality of egress ports, wherein the first set of ports is determined using the identifier and a first table from a plurality of tables, wherein the first table is associated with a first VLAN from a plurality of VLANs, wherein the first VLAN is associated with the first class but not the second class;
    sending a copy of the first packet through each egress port from the first set of ports;
    determining a second set of ports from the plurality of egress ports using a second table from the plurality of tables, wherein the second table is associated with a second VLAN from the plurality of VLANs, wherein the second VLAN is associated with the second class but not the first class; and
    sending a copy of the second packet through each egress port from the second set of ports.
  • 4. A non-transitory computer-readable storage medium storing instructions which, when executed by one or more processors, cause the one or more processors to perform operations including:
    generating a first identifier by inputting attributes of a first data packet received by a network device into the hash function, the network device including a plurality of egress ports;
    reading a first set of egress ports from a first table row from a table, wherein the first table row is associated with the first identifier;
    sending a copy of the first data packet through each egress port in the first set of egress ports;
    generating a second identifier for a second data packet received by the network device by inputting attributes of the second data packet into the hash function;
    reading a second set of egress ports from a second table row from the table, wherein the second table row is associated with the second identifier but not the first identifier; and
    sending a copy of the second data packet through each egress port in the second set of egress ports;
    wherein the second set of egress ports differs from the first set of egress ports;
    wherein the second set of egress ports and the first set of egress ports both belong to a same VLAN.
  • 5. A method comprising:
    receiving a data packet on an ingress port of a network device, wherein the ingress port is associated with an ingress trunk, the network device including a plurality of egress ports;
    determining a category for the data packet by applying, to a first set of attributes of the data packet, a set of rules associated with the ingress trunk;
    selecting, from a plurality of VLANs, a particular VLAN that is mapped to the category;
    selecting, from a plurality of VLAN flooding tables, a particular VLAN flooding table that is mapped to the particular VLAN;
    determining an identifier by inputting a second set of attributes of the data packet into a hash function;
    locating, in the particular VLAN flooding table, a particular row associated with the identifier, wherein the particular row identifies a set of egress ports from the plurality of egress ports;
    generating a duplicate of the data packet; and
    forwarding the duplicate of the data packet through each of the egress ports in the set of egress ports identified by the particular row, wherein the forwarding includes using VLAN flooding;
    wherein the set of egress ports identified by the particular row includes a first combination of egress ports, the first combination including one egress port per egress trunk of the particular VLAN;
    wherein a second row of the particular VLAN flooding table specifies a second combination of egress ports, the second combination including one egress port per egress trunk of the particular VLAN; and
    wherein, for each particular egress trunk from the plurality of trunks of the particular VLAN, the first combination specifies a first egress port of the particular egress trunk and the second combination specifies a second egress port of the particular egress trunk that is different from the first egress port of the particular egress trunk.
  • 6. A network device, comprising:
    a plurality of ports; and
    one or more processors;
    wherein the network device is configured to include a plurality of virtual local area networks (VLANs), wherein each VLAN from the plurality of VLANs is associated with one or more ports from the plurality of ports;
    and wherein the one or more processors are configured to:
    determine a class for a packet, wherein the class is determined using one or more first attributes of the packet;
    select a VLAN from the plurality of VLANs, wherein the selected VLAN is associated with the class;
    determine, using a hash function and one or more second attributes of the packet, an identifier for the packet;
    select, using the identifier, a set of ports from the one or more ports associated with the selected VLAN, wherein a number of the ports in the set of ports is less than a number of the one or more ports associated with the selected VLAN; and
    send a copy of the packet through each port from the set of ports.
  • 7. The network device of claim 6, wherein the selected VLAN is associated with a table, and wherein the set of ports is selected using the table.
  • 8. The network device of claim 6, wherein the selected VLAN is associated with a table, and wherein the identifier is used to select a row from the table, the table row providing the set of ports.
  • 9. The network device of claim 6, wherein the selected VLAN includes one or more trunks, wherein each trunk from the one or more trunks is associated with at least one port from the one or more ports associated with the VLAN, and wherein the set of ports includes at least one port from each of the one or more trunks.
  • 10. The network device of claim 6, wherein the selected VLAN includes one or more trunks, wherein each trunk from the one or more trunks is associated with at least one port from the one or more ports associated with the VLAN, wherein the selected VLAN is associated with a table, wherein the identifier is used to select a row from the table, wherein the table row includes at least one port from each of the one or more trunks, and wherein the set of ports is provided by the table row.
  • 11. The network device of claim 6, wherein the class for the packet is determined using classification rules associated with an ingress port of the network device, wherein the packet is received at the ingress port.
  • 12. The network device of claim 6, wherein the class for the packet is determined using classification rules associated with an ingress trunk of the network device, wherein the ingress trunk includes an ingress port, and wherein the packet was received at the ingress port.
CROSS-REFERENCES TO RELATED APPLICATIONS; CLAIM OF PRIORITY

The present application claims priority under 35 U.S.C. §119(e) to U.S. Provisional Patent Application No. 61/919,244 filed Dec. 20, 2013, titled RULE BASED NETWORK TRAFFIC INTERCEPTION AND DISTRIBUTION SCHEME. The present application is related to U.S. Pat. No. 8,615,008 filed Jul. 11, 2007, titled DUPLICATING NETWORK TRAFFIC THROUGH TRANSPARENT VLAN FLOODING. The contents of both U.S. Provisional Patent Application No. 61/919,244 and U.S. Pat. No. 8,615,008 are incorporated by reference herein.

US Referenced Citations (246)
Number Name Date Kind
5031094 Toegel et al. Jul 1991 A
5359593 Derby et al. Oct 1994 A
5948061 Merriman et al. Sep 1999 A
5951634 Sitbon et al. Sep 1999 A
6006269 Phaal Dec 1999 A
6006333 Nielsen Dec 1999 A
6092178 Jindal et al. Jul 2000 A
6112239 Kenner et al. Aug 2000 A
6115752 Chauhan Sep 2000 A
6128279 O'Neil et al. Oct 2000 A
6128642 Doraswamy et al. Oct 2000 A
6148410 Baskey et al. Nov 2000 A
6167445 Gai et al. Dec 2000 A
6167446 Lister et al. Dec 2000 A
6182139 Brendel Jan 2001 B1
6195691 Brown Feb 2001 B1
6233604 Van Horne et al. May 2001 B1
6286039 Van Horne et al. Sep 2001 B1
6286047 Ramanathan et al. Sep 2001 B1
6304913 Rune Oct 2001 B1
6324580 Jindal et al. Nov 2001 B1
6327622 Jindal et al. Dec 2001 B1
6336137 Lee et al. Jan 2002 B1
6381627 Kwan et al. Apr 2002 B1
6389462 Cohen et al. May 2002 B1
6427170 Sitaraman et al. Jul 2002 B1
6434118 Kirschenbaum Aug 2002 B1
6438652 Jordan et al. Aug 2002 B1
6446121 Shah et al. Sep 2002 B1
6449657 Stanbach, Jr. et al. Sep 2002 B2
6470389 Chung et al. Oct 2002 B1
6473802 Masters Oct 2002 B2
6480508 Mwikalo et al. Nov 2002 B1
6490624 Sampson et al. Dec 2002 B1
6549944 Weinberg et al. Apr 2003 B1
6567377 Vepa et al. May 2003 B1
6578066 Logan et al. Jun 2003 B1
6606643 Emens et al. Aug 2003 B1
6665702 Zisapel et al. Dec 2003 B1
6671275 Wong et al. Dec 2003 B1
6681232 Sistanizadeh et al. Jan 2004 B1
6681323 Fontanesi et al. Jan 2004 B1
6691165 Bruck et al. Feb 2004 B1
6697368 Chang et al. Feb 2004 B2
6735218 Chang et al. May 2004 B2
6745241 French et al. Jun 2004 B1
6751616 Chan Jun 2004 B1
6772211 Lu et al. Aug 2004 B2
6779017 Lamberton et al. Aug 2004 B1
6789125 Aviani et al. Sep 2004 B1
6826198 Turina et al. Nov 2004 B2
6831891 Mansharamani et al. Dec 2004 B2
6839700 Doyle et al. Jan 2005 B2
6850984 Kalkunte et al. Feb 2005 B1
6874152 Vermeire et al. Mar 2005 B2
6879995 Chinta et al. Apr 2005 B1
6898633 Lyndersay et al. May 2005 B1
6901072 Wong May 2005 B1
6901081 Ludwig May 2005 B1
6928485 Krishnamurthy et al. Aug 2005 B1
6944678 Lu et al. Sep 2005 B2
6963914 Breitbart et al. Nov 2005 B1
6963917 Callis et al. Nov 2005 B1
6985956 Luke et al. Jan 2006 B2
6987763 Rochberger et al. Jan 2006 B2
6996615 McGuire Feb 2006 B1
6996616 Leighton et al. Feb 2006 B1
7000007 Valenti Feb 2006 B1
7009968 Ambe et al. Mar 2006 B2
7020698 Andrews et al. Mar 2006 B2
7020714 Kalyanaraman et al. Mar 2006 B2
7028083 Levine et al. Apr 2006 B2
7031304 Arberg et al. Apr 2006 B1
7032010 Swildens et al. Apr 2006 B1
7036039 Holland Apr 2006 B2
7058717 Chao et al. Jun 2006 B2
7062642 Langrind et al. Jun 2006 B1
7086061 Joshi et al. Aug 2006 B1
7089293 Grosner et al. Aug 2006 B2
7126910 Sridhar Oct 2006 B1
7127713 Davis et al. Oct 2006 B2
7136932 Schneider Nov 2006 B1
7139242 Bays Nov 2006 B2
7177933 Foth Feb 2007 B2
7185052 Day Feb 2007 B2
7187687 Davis et al. Mar 2007 B1
7188189 Karol et al. Mar 2007 B2
7197547 Miller et al. Mar 2007 B1
7206806 Pineau Apr 2007 B2
7215637 Ferguson et al. May 2007 B1
7225272 Kelley et al. May 2007 B2
7240015 Karmouch et al. Jul 2007 B1
7240100 Wein et al. Jul 2007 B1
7254626 Kommula et al. Aug 2007 B1
7257642 Bridger et al. Aug 2007 B1
7260645 Bays Aug 2007 B2
7266117 Davis Sep 2007 B1
7266120 Cheng et al. Sep 2007 B2
7277954 Stewart et al. Oct 2007 B1
7292573 LaVigne et al. Nov 2007 B2
7296088 Padmanabhan et al. Nov 2007 B1
7321926 Zhang et al. Jan 2008 B1
7424018 Gallatin et al. Sep 2008 B2
7436832 Gallatin et al. Oct 2008 B2
7440467 Gallatin et al. Oct 2008 B2
7450527 Ashwood Smith Nov 2008 B2
7454500 Hsu et al. Nov 2008 B1
7483374 Nilakantan et al. Jan 2009 B2
7492713 Turner Feb 2009 B1
7506065 LaVigne et al. Mar 2009 B2
7555562 See et al. Jun 2009 B2
7587487 Gunturu Sep 2009 B1
7606203 Shabtay et al. Oct 2009 B1
7690040 Frattura et al. Mar 2010 B2
7706363 Daniel et al. Apr 2010 B1
7720066 Weyman et al. May 2010 B2
7720076 Dobbins et al. May 2010 B2
7747737 Apte et al. Jun 2010 B1
7787454 Won et al. Aug 2010 B1
7792047 Gallatin et al. Sep 2010 B2
7835348 Kasralikar Nov 2010 B2
7835358 Gallatin et al. Nov 2010 B2
7848326 Leong et al. Dec 2010 B1
7889748 Leong et al. Feb 2011 B1
7940766 Olakangil et al. May 2011 B2
7953089 Ramakrishnan et al. May 2011 B1
8208494 Leong Jun 2012 B2
8238344 Chen et al. Aug 2012 B1
8239960 Frattura et al. Aug 2012 B2
8248928 Wang Aug 2012 B1
8270845 Cheung et al. Sep 2012 B2
8315256 Leong et al. Nov 2012 B2
8386846 Cheung Feb 2013 B2
8391286 Gallatin et al. Mar 2013 B2
8514718 Zijst Aug 2013 B2
8537697 Leong et al. Sep 2013 B2
8570862 Leong et al. Oct 2013 B1
8615008 Natarajan et al. Dec 2013 B2
8654651 Leong et al. Feb 2014 B2
8824466 Won et al. Sep 2014 B2
8830819 Leong et al. Sep 2014 B2
8873557 Nguyen Oct 2014 B2
8891527 Wang Nov 2014 B2
8897138 Yu et al. Nov 2014 B2
8953458 Leong et al. Feb 2015 B2
9155075 Song Oct 2015 B2
9264446 Goldfarb Feb 2016 B2
9294367 Natarajan Mar 2016 B2
9380002 Johansson Jun 2016 B2
20010049741 Skene et al. Dec 2001 A1
20010052016 Skene et al. Dec 2001 A1
20020018796 Wironen Feb 2002 A1
20020023089 Woo Feb 2002 A1
20020026551 Kamimaki et al. Feb 2002 A1
20020038360 Andrews et al. Mar 2002 A1
20020055939 Nardone et al. May 2002 A1
20020059170 Vange May 2002 A1
20020059464 Hata et al. May 2002 A1
20020062372 Hong et al. May 2002 A1
20020078233 Biliris et al. Jun 2002 A1
20020091840 Pulier et al. Jul 2002 A1
20020112036 Bohannon et al. Aug 2002 A1
20020120743 Shabtay et al. Aug 2002 A1
20020124096 Loguinov et al. Sep 2002 A1
20020133601 Kennamer et al. Sep 2002 A1
20020150048 Ha et al. Oct 2002 A1
20020154600 Ido et al. Oct 2002 A1
20020188862 Trethewey et al. Dec 2002 A1
20020194324 Guha Dec 2002 A1
20020194335 Maynard Dec 2002 A1
20030023744 Sadot Jan 2003 A1
20030031185 Kikuchi et al. Feb 2003 A1
20030035430 Islam et al. Feb 2003 A1
20030065711 Acharya et al. Apr 2003 A1
20030065763 Swildens et al. Apr 2003 A1
20030105797 Dolev et al. Jun 2003 A1
20030115283 Barbir et al. Jun 2003 A1
20030135509 Davis et al. Jul 2003 A1
20030202511 Sreejith et al. Oct 2003 A1
20030210686 Terrell et al. Nov 2003 A1
20030210694 Jayaraman et al. Nov 2003 A1
20030229697 Borella Dec 2003 A1
20040019680 Chao et al. Jan 2004 A1
20040024872 Kelley et al. Feb 2004 A1
20040032868 Oda Feb 2004 A1
20040064577 Dahlin et al. Apr 2004 A1
20040194102 Neerdaels Sep 2004 A1
20040243718 Fujiyoshi Dec 2004 A1
20040249939 Amini et al. Dec 2004 A1
20040249971 Klinker Dec 2004 A1
20050021883 Shishizuka et al. Jan 2005 A1
20050033858 Swildens et al. Feb 2005 A1
20050060418 Sorokopud Mar 2005 A1
20050060427 Phillips et al. Mar 2005 A1
20050086295 Cunningham et al. Apr 2005 A1
20050149531 Srivastava Jul 2005 A1
20050169180 Ludwig Aug 2005 A1
20050190695 Phaal Sep 2005 A1
20050207417 Ogawa et al. Sep 2005 A1
20050278565 Frattura et al. Dec 2005 A1
20050286416 Shimonishi et al. Dec 2005 A1
20060036743 Deng et al. Feb 2006 A1
20060039374 Belz et al. Feb 2006 A1
20060045082 Fertell et al. Mar 2006 A1
20060143300 See et al. Jun 2006 A1
20070053296 Yazaki Mar 2007 A1
20070195761 Tatar et al. Aug 2007 A1
20070233891 Luby et al. Oct 2007 A1
20080002591 Ueno Jan 2008 A1
20080031141 Lean et al. Feb 2008 A1
20080089336 Mercier Apr 2008 A1
20080137660 Olakangil Jun 2008 A1
20080159141 Soukup et al. Jul 2008 A1
20080181119 Beyers Jul 2008 A1
20080195731 Harmel et al. Aug 2008 A1
20080225710 Raja et al. Sep 2008 A1
20080304423 Chuang et al. Dec 2008 A1
20090135835 Gallatin et al. May 2009 A1
20090262745 Leong et al. Oct 2009 A1
20100135323 Leong Jun 2010 A1
20100209047 Cheung et al. Aug 2010 A1
20100325178 Won et al. Dec 2010 A1
20110044349 Gallatin et al. Feb 2011 A1
20110058566 Leong et al. Mar 2011 A1
20110211443 Leong et al. Sep 2011 A1
20110216771 Gallatin et al. Sep 2011 A1
20120023340 Cheung Jan 2012 A1
20120157088 Gerber et al. Jun 2012 A1
20120243533 Leong Sep 2012 A1
20120257635 Gallatin et al. Oct 2012 A1
20130010613 Cafarelli et al. Jan 2013 A1
20130034107 Leong et al. Feb 2013 A1
20130156029 Gallatin et al. Jun 2013 A1
20130173784 Wang et al. Jul 2013 A1
20130201984 Wang Aug 2013 A1
20130259037 Natarajan Oct 2013 A1
20130272135 Leong Oct 2013 A1
20140016500 Leong et al. Jan 2014 A1
20140022916 Natarajan et al. Jan 2014 A1
20140029451 Nguyen Jan 2014 A1
20140204747 Yu et al. Jul 2014 A1
20140321278 Cafarelli et al. Oct 2014 A1
20150033169 Lection et al. Jan 2015 A1
20150215841 Hsu et al. Jul 2015 A1
20160164768 Natarajan Jun 2016 A1
20160204996 Lindgren Jul 2016 A1
Foreign Referenced Citations (4)
Number Date Country
2654340 Oct 2013 EP
20070438 Feb 2008 IE
20070438 Feb 2008 IE
2010135474 Nov 2010 WO
Non-Patent Literature Citations (65)
Entry
U.S. Appl. No. 61/919,244, filed Dec. 20, 2013 by Chen et al.
U.S. Appl. No. 61/932,650, filed Jan. 28, 2014 by Munshi et al.
U.S. Appl. No. 61/994,693, filed May 16, 2014 by Munshi et al.
U.S. Appl. No. 62/088,434, filed Dec. 5, 2014 by Hsu et al.
U.S. Appl. No. 62/137,073, filed Mar. 23, 2015 by Chen et al.
U.S. Appl. No. 62/137,084, filed Mar. 23, 2015 by Chen et al.
U.S. Appl. No. 62/137,096, filed Mar. 23, 2015 by Laxman et al.
U.S. Appl. No. 62/137,106, filed Mar. 23, 2015 by Laxman et al.
U.S. Appl. No. 60/998,410, filed Oct. 9, 2007 by Wang et al.
PCT Patent Application No. PCT/US2015/012915 filed on Jan. 26, 2015 by Hsu et al.
U.S. Appl. No. 14/848,586, filed Sep. 9, 2015 by Chen et al.
U.S. Appl. No. 14/848,645, filed Sep. 9, 2015 by Chen et al.
U.S. Appl. No. 14/848,677, filed Sep. 9, 2015 by Chen et al.
Brocade and IBM Real-Time Network Analysis Solution; 2011 Brocade Communications Systems, Inc.; 2 pages.
Brocade IP Network Leadership Technology; Enabling Non-Stop Networking for Stackable Switches with Hitless Failover; 2010; 3 pages.
Gigamon Adaptive Packet Filtering; Feature Brief; 3098-03 Apr. 2015; 3 pages.
Gigamon: Active Visibility for Multi-Tiered Security Solutions Overview; 3127-02; Oct. 14; 5 pages.
Gigamon: Application Note Stateful GTP Correlation; 4025-02; Dec. 13; 9 pages.
Gigamon: Enabling Network Monitoring at 40Gbps and 100Gbps with Flow Mapping Technology White Paper; 2012; 4 pages.
Gigamon: Enterprise System Reference Architecture for the Visibility Fabric White Paper; 5005-03; Oct. 14; 13 pages.
Gigamon: Gigamon Intelligent Flow Mapping White Paper; 3039-02; Aug. 13; 7 pages.
Gigamon: GigaVUE-HB1 Data Sheet; 4011-07; Oct. 14; 4 pages.
Gigamon: Maintaining 3G and 4G LTE Quality of Service White Paper; 2012; 4 pages.
Gigamon: Monitoring, Managing, and Securing SDN Deployments White Paper; 3106-01; May 14; 7 pages.
Gigamon: Netflow Generation Feature Brief; 3099-04; Oct. 14; 2 pages.
Gigamon: Service Provider System Reference Architecture for the Visibility Fabric White Paper; 5004-01; Mar. 14; 11 pages.
Gigamon: The Visibility Fabric Architecture—A New Approach to Traffic Visibility White Paper; 2012-2013; 8 pages.
Gigamon: Unified Visibility Fabric—A New Approach to Visibility White Paper; 3072-04; Jan. 15; 6 pages.
Gigamon: Unified Visibility Fabric Solution Brief; 3018-03; Jan. 15; 4 pages.
Gigamon: Unified Visibility Fabric; https://www.gigamon.com/unfied-visibility-fabric; Apr. 7, 2015; 5 pages.
Gigamon: Visibility Fabric Architecture Solution Brief; 2012-2013; 2 pages.
Gigamon: Visibility Fabric; More than Tap and Aggregation.bmp; 2014; 1 page.
Gigamon: Vistapointe Technology Solution Brief; Visualize-Optimize-Monetize-3100-02; Feb. 14; 2 pages.
IBM User Guide, Version 2.1 AIX, Solaris and Windows NT, Third Edition (Mar. 1999), 102 pages.
International Search Report & Written Opinion for PCT Application PCT/US2015/012915 mailed Apr. 10, 2015, 15 pages.
Ixia Anue GTP Session Controller; Solution Brief; 915-6606-01 Rev. A, Sep. 2013; 2 pages.
Ixia: Creating a Visibility Architecture—a New Perspective on Network Visibility White Paper; 915-6581-01 Rev. A, Feb. 2014; 14 pages.
Netscout: nGenius Subscriber Intelligence; Data Sheet; SPDS—001-12; 2012; 6 pages.
Netscout; Comprehensive Core-to-Access IP Session Analysis for GPRS and UMTS Networks; Technical Brief; Jul. 16, 2010; 6 pages.
ntop: Monitoring Mobile Networks (2G, 3G and LTE) using nProbe; http://www.ntop.org/nprobe/monitoring-mobile-networks-2g-3g-and-lte-using-nprobe; Apr. 2, 2015; 4 pages.
White Paper, Foundry Networks, “Server Load Balancing in Today's Web-Enabled Enterprises,” Apr. 2002, 10 pages.
Final Office Action for U.S. Appl. No. 14/030,782 mailed on Jul. 29, 2015, 26 pages.
Non-Final Office Action for U.S. Appl. No. 11/937,285 mailed on Jul. 6, 2009, 28 pages.
Final Office Action for U.S. Appl. No. 11/937,285 mailed on Mar. 3, 2010, 28 pages.
Non-Final Office Action for U.S. Appl. No. 11/937,285 mailed on Aug. 17, 2010, 28 pages.
Final Office Action for U.S. Appl. No. 11/937,285 mailed on Jan. 20, 2011, 41 pages.
Final Office Action for U.S. Appl. No. 11/937,285 mailed on May 20, 2011, 37 pages.
Non-Final Office Action for U.S. Appl. No. 11/937,285 mailed on Nov. 28, 2011, 40 pages.
Notice of Allowance for U.S. Appl. No. 11/937,285 mailed on Jun. 5, 2012, 10 pages.
Restriction Requirement for U.S. Appl. No. 13/584,534 mailed on Jul. 21, 2014, 5 pages.
Non-Final Office Action for U.S. Appl. No. 13/584,534 mailed on Oct. 24, 2014, 24 pages.
Final Office Action for U.S. Appl. No. 13/584,534 mailed on Jun. 25, 2015, 21 pages.
Non-Final Office Action for U.S. Appl. No. 11/827,524 mailed on Dec. 10, 2009, 15 pages.
Non-Final Office Action for U.S. Appl. No. 11/827,524 mailed on Jun. 2, 2010, 14 pages.
Non-Final Office Action for U.S. Appl. No. 11/827,524 mailed on Nov. 26, 2010, 16 pages.
Final Office Action for U.S. Appl. No. 11/827,524 mailed on May 6, 2011, 19 pages.
Advisory Action for U.S. Appl. No. 11/827,524 mailed on Jul. 14, 2011, 5 pages.
Non-Final Office Action for U.S. Appl. No. 11/827,524 mailed on Oct. 18, 2012, 24 pages.
Notice of Allowance for U.S. Appl. No. 11/827,524 mailed Jun. 25, 2013, 11 pages.
Non-Final Office Action for U.S. Appl. No. 14/030,782 mailed on Oct. 6, 2014, 14 pages.
Notice of Allowance for U.S. Appl. No. 14/030,782, mailed on Nov. 16, 2015, 20 pages.
Notice of Allowance for U.S. Appl. No. 13/584,534, mailed on Dec. 16, 2015, 7 pages.
Non-Final Office Action for U.S. Appl. No. 15/043,431, mailed on Apr. 13, 2016, 18 pages.
Notice of Allowance for U.S. Appl. No. 15/043,421, mailed on Jun. 27, 2016, 5 pages.
Non-Final Office Action for U.S. Appl. No. 14/603,304, mailed on Aug. 1, 2016, 9 pages.
Related Publications (1)
Number Date Country
20150180802 A1 Jun 2015 US
Provisional Applications (1)
Number Date Country
61919244 Dec 2013 US