Techniques for user-defined tagging of traffic in a network visibility system

Information

  • Patent Grant
  • Patent Number
    9,866,478
  • Date Filed
    Wednesday, September 9, 2015
  • Date Issued
    Tuesday, January 9, 2018
Abstract
In one embodiment, a data plane component of the network visibility system can receive a data packet tapped from a source network. The data plane component can further match the data packet with an entry in a rule table, where the entry includes one or more match parameters, and in response to the matching can tag the data packet with a zone identifier defined in the entry. The data plane component can then forward the tagged data packet to an analytic server for analysis.
Description
BACKGROUND

Unless expressly indicated herein, the material presented in this section is not prior art to the claims of the present application and is not admitted to be prior art by inclusion in this section.


General Packet Radio Service (GPRS) is a standard for wireless data communications that allows 3G and 4G/LTE mobile networks to transmit Internet Protocol (IP) packets to external networks such as the Internet. FIG. 1 is a simplified diagram of an exemplary 3G network 100 that makes use of GPRS. As shown, 3G network 100 includes a mobile station (MS) 102 (e.g., a cellular phone, tablet, etc.) that is wirelessly connected to a base station subsystem (BSS) 104. BSS 104 is, in turn, connected to a serving GPRS support node (SGSN) 106, which communicates with a gateway GPRS support node (GGSN) 108 via a GPRS core network 110. Although only one of each of these entities is depicted in FIG. 1, it should be appreciated that any number of these entities may be supported. For example, multiple MSs 102 may connect to each BSS 104, and multiple BSSs 104 may connect to each SGSN 106. Further, multiple SGSNs 106 may interface with multiple GGSNs 108 via GPRS core network 110.


When a user wishes to access Internet 114 via MS 102, MS 102 sends a request message (known as an “Activate PDP Context” request) to SGSN 106 via BSS 104. In response to this request, SGSN 106 activates a session on behalf of the user and exchanges GPRS Tunneling Protocol (GTP) control packets (referred to as “GTP-C” packets) with GGSN 108 in order to signal session activation (as well as set/adjust certain session parameters, such as quality-of-service, etc.). The activated user session is associated with a tunnel between SGSN 106 and GGSN 108 that is identified by a unique tunnel endpoint identifier (TEID). In a scenario where MS 102 has roamed to BSS 104 from a different BSS served by a different SGSN, SGSN 106 may exchange GTP-C packets with GGSN 108 in order to update an existing session for the user (instead of activating a new session).


Once the user session has been activated/updated, MS 102 transmits user data packets (e.g., IPv4, IPv6, or Point-to-Point Protocol (PPP) packets) destined for an external host/network to BSS 104. The user data packets are encapsulated into GTP user, or “GTP-U,” packets and sent to SGSN 106. SGSN 106 then tunnels, via the tunnel associated with the user session, the GTP-U packets to GGSN 108. Upon receiving the GTP-U packets, GGSN 108 strips the GTP header from the packets and routes them to Internet 114, thereby enabling the packets to be delivered to their intended destinations.
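
To make the tunneling step concrete, the following Python fragment models GTP-U encapsulation at a very high level. It is a sketch only, not the actual GTP wire format; the dictionary fields (`msg_type`, `teid`, `payload`) are illustrative stand-ins for the real header.

```python
# Simplified, illustrative model of GTP-U encapsulation/decapsulation.
# This is NOT the actual GTP wire format; it only shows how a user data
# packet is carried inside a tunnel identified by a TEID.

def gtp_u_encapsulate(user_packet: bytes, teid: int) -> dict:
    """SGSN side: wrap a user data packet (e.g., an IPv4 packet) in a
    GTP-U envelope addressed to the tunnel identified by `teid`."""
    return {"msg_type": "GTP-U", "teid": teid, "payload": user_packet}

def gtp_u_decapsulate(gtp_packet: dict) -> bytes:
    """GGSN side: strip the GTP envelope and recover the inner user
    packet so it can be routed toward the Internet."""
    assert gtp_packet["msg_type"] == "GTP-U"
    return gtp_packet["payload"]

# Example: a user IP packet is tunneled over TEID 0x1A2B and unwrapped.
inner = b"\x45\x00...user IPv4 packet bytes..."
tunneled = gtp_u_encapsulate(inner, teid=0x1A2B)
assert gtp_u_decapsulate(tunneled) == inner
```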


The architecture of a 4G/LTE network that makes use of GPRS is similar in certain respects to 3G network 100 of FIG. 1. However, in a 4G/LTE network, BSS 104 is replaced by an eNode-B, SGSN 106 is replaced by a mobility management entity (MME) and a Serving Gateway (SGW), and GGSN 108 is replaced by a packet data network gateway (PGW).


For various reasons, an operator of a mobile network such as network 100 of FIG. 1 may be interested in analyzing traffic flows within the network. For instance, the operator may want to collect and analyze flow information for network management or business intelligence/reporting. Alternatively or in addition, the operator may want to monitor traffic flows in order to, e.g., detect and thwart malicious network attacks.


To facilitate these and other types of analyses, the operator can implement a network telemetry, or “visibility,” system, such as system 200 shown in FIG. 2 according to an embodiment. At a high level, network visibility system 200 can intercept traffic flowing through one or more connected networks (in this example, GTP traffic between SGSN-GGSN pairs in a 3G network 206 and/or GTP traffic between eNodeB/MME-SGW pairs in a 4G/LTE network 208) and can intelligently distribute the intercepted traffic among a number of analytic servers 210(1)-(M). Analytic servers 210(1)-(M), which may be operated by the same operator/service provider as networks 206 and 208, can then analyze the received traffic for various purposes, such as network management, reporting, security, etc.


In the example of FIG. 2, network visibility system 200 comprises two components: a GTP Visibility Router (GVR) 202 and a GTP Correlation Cluster (GCC) 204. GVR 202 can be considered the data plane component of network visibility system 200 and is generally responsible for receiving and forwarding intercepted traffic (e.g., GTP traffic tapped from 3G network 206 and/or 4G/LTE network 208) to analytic servers 210(1)-(M).


GCC 204 can be considered the control plane of network visibility system 200 and is generally responsible for determining forwarding rules on behalf of GVR 202. Once these forwarding rules have been determined, GCC 204 can program the rules into GVR 202's forwarding tables (e.g., content-addressable memories, or CAMs) so that GVR 202 can forward network traffic to analytic servers 210(1)-(M) according to customer (e.g., network operator) requirements. As one example, GCC 204 can identify and correlate GTP-U packets that belong to the same user session but include different source (e.g., SGSN) IP addresses. Such a situation may occur if, e.g., a mobile user starts a phone call in one wireless access area serviced by one SGSN and then roams, during the same phone call, to a different wireless access area serviced by a different SGSN. GCC 204 can then create and program “dynamic” forwarding rules in GVR 202 that ensure these packets (which correspond to the same user session) are all forwarded to the same analytic server for consolidated analysis.


Additional details regarding an exemplary implementation of network visibility system 200, as well as the GTP correlation processing attributed to GCC 204, can be found in commonly-owned U.S. patent application Ser. No. 14/603,304, entitled “SESSION-BASED PACKET ROUTING FOR FACILITATING ANALYTICS,” the entire contents of which are incorporated herein by reference for all purposes.


In certain embodiments, as part of the traffic analysis performed by analytic servers 210(1)-(M), servers 210(1)-(M) may be interested in categorizing the data packets they receive from GVR 202 according to various criteria. For instance, analytic servers 210(1)-(M) may want to categorize the data packets based on the physical network path, or “circuit,” they originated from in 3G network 206 or 4G/LTE network 208. Analytic servers 210(1)-(M) can then use this information to facilitate their analyses. By way of example, assume that an analytic server 210 sees that data packets in a certain flow are being dropped or delayed. In this case, if the data packets are categorized according to their point of origin in source network 206 or 208, the analytic server can determine that there is a problem with the physical network path/circuit from which the affected data packets originated. The network provider can thereafter take appropriate steps to address the problem with that particular circuit.


One issue with performing this packet categorization on analytic servers 210(1)-(M) is that the analytic servers may not have sufficient compute resources to perform the categorization in an efficient manner. This may be particularly true if a high volume of traffic is sent from GVR 202 to servers 210(1)-(M) on a continuous basis. Another issue with performing this packet categorization on analytic servers 210(1)-(M) is that the analytic servers may not have access to all of the information needed to successfully carry out the categorization task. For instance, in the example above where analytic servers 210(1)-(M) are interested in categorizing data packets according to the network circuit they originated from in source networks 206/208, servers 210(1)-(M) may not be able to ascertain the source circuit for each data packet without knowing which ingress port of GVR 202 received the packet. This ingress port information would only be known to the components of network visibility system 200 (i.e., GVR 202 and GCC 204).


SUMMARY

Techniques for enabling user-defined tagging of traffic in a network visibility system are provided. In one embodiment, a data plane component of the network visibility system can receive a data packet tapped from a source network. The data plane component can further match the data packet with an entry in a rule table, where the entry includes one or more match parameters, and in response to the matching can tag the data packet with a zone identifier defined in the entry. The data plane component can then forward the tagged data packet to an analytic server for analysis.


The following detailed description and accompanying drawings provide a better understanding of the nature and advantages of particular embodiments.





BRIEF DESCRIPTION OF DRAWINGS


FIG. 1 depicts an exemplary 3G network.



FIG. 2 depicts a network visibility system according to an embodiment.



FIG. 3 depicts an architecture and runtime workflow for a specific network visibility system implementation according to an embodiment.



FIG. 4 depicts a modified version of the workflow of FIG. 3 that supports user-defined packet tagging according to an embodiment.



FIG. 5 depicts a high-level flowchart for performing user-defined tagging of traffic in a network visibility system according to an embodiment.



FIG. 6 depicts a network switch/router according to an embodiment.



FIG. 7 depicts a computer system according to an embodiment.





DETAILED DESCRIPTION

In the following description, for purposes of explanation, numerous examples and details are set forth in order to provide an understanding of various embodiments. It will be evident, however, to one skilled in the art that certain embodiments can be practiced without some of these details, or can be practiced with modifications or equivalents thereof.


1. Overview


Embodiments of the present disclosure provide techniques for performing user-defined tagging of traffic that is received by a data plane component of a network visibility system (e.g., GVR 202 of system 200) and forwarded to one or more analytic servers (e.g., servers 210(1)-(M)). In one set of embodiments, a user of the network visibility system can define a set of rules that assign “zone identifiers” to incoming packets at the data plane component based on various criteria (e.g., the ingress ports on which the packets are received, source IP addresses, destination IP addresses, etc.). Each zone identifier can correspond to a packet categorization, or class, that is known to the analytic servers. For example, in a particular embodiment, each zone identifier can identify a physical network circuit from which the packet originated in a tapped source network (e.g., network 206 or 208 of FIG. 2). The rules comprising the zone identifiers can be maintained on the control plane component of the network visibility system and can be programmed into an appropriate rule table (referred to herein as a “zoning table”) of the data plane component at, e.g., system boot-up/initialization.


Then, during runtime of the network visibility system, the data plane component can receive a data packet from a tapped source network and can attempt to match the data packet against the rules in the zoning table. If a match is made, the data plane component can tag the data packet with the zone identifier included in the matched rule. The data plane component can insert the zone identifier into any of a number of existing fields in the data packet. For example, in embodiments where the data packet is a GTP packet, the data plane component can insert the zone identifier in an inner VLAN ID field of the GTP packet. Finally, the data plane component can send the tagged data packet through the remainder of its forwarding pipeline, which causes the data packet to be forwarded to a particular analytic server. Upon receiving the tagged data packet, the analytic server can extract the zone identifier from the packet and use the extracted zone identifier to assign an appropriate categorization/classification to the packet for analysis purposes.


With the approach described above, the data plane component of the network visibility system (in cooperation with the control plane component) can efficiently handle the task of categorizing incoming traffic via the user-defined zone identifiers, and can communicate these categorizations (in the form of the packet tags) to the analytic servers. Accordingly, there is no need for the analytic servers to dedicate compute resources to this task; the analytic servers need only extract the zone identifiers from the received packets and apply them (if needed) as part of their analytic processing. Further, by performing this categorization on the data plane component rather than the analytic servers, the rules that are used to match zone identifiers to data packets can take advantage of information that is available to the data plane component, but may not be readily available to the analytic servers (such as the ingress ports of the data plane component on which the packets are received). Thus, this approach can enable certain types of packet categorization that otherwise would not be possible if performed solely on the analytic servers.


These and other aspects of the present disclosure are described in further detail in the sections that follow.


2. Network Visibility System Architecture and Runtime Workflow


To provide context for the user-defined tagging techniques of the present disclosure, FIG. 3 depicts a more detailed representation of the architecture of network visibility system 200 of FIG. 2 (shown as network visibility system 300) and an exemplary runtime workflow that may be performed within system 300 according to an embodiment.


As shown in FIG. 3, network visibility system 300 comprises a GVR 302 and GCC 304. GVR 302 internally includes an ingress card 306, a whitelist card 308, a service card 310, and an egress card 312. In a particular embodiment, each card 306-312 represents a separate line card or I/O module in GVR 302. Ingress card 306 comprises a number of ingress (i.e., “GVIP”) ports 314(1)-(N), which are communicatively coupled with one or more 3G and/or 4G/LTE mobile networks (e.g., networks 206 and 208 of FIG. 2). Further, egress card 312 comprises a number of egress (i.e., “GVAP”) ports 316(1)-(M), which are communicatively coupled with one or more analytic servers (e.g., servers 210(1)-(M) of FIG. 2). Although only a single instance of each of ingress card 306, whitelist card 308, service card 310, and egress card 312 is shown, it should be appreciated that any number of these cards may be supported.


In operation, GVR 302 can receive an intercepted (i.e., tapped) network packet from 3G network 206 or 4G/LTE network 208 via a GVIP port 314 of ingress card 306 (step (1)). At steps (2) and (3), ingress card 306 can remove the received packet's MPLS headers and determine whether the packet is a GTP packet (i.e., a GTP-C or GTP-U packet) or not. If the packet is not a GTP packet, ingress card 306 can match the packet against a “Gi” table that contains forwarding rules (i.e., entries) for non-GTP traffic (step (4)). Based on the Gi table, ingress card 306 can forward the packet to an appropriate GVAP port 316 for transmission to an analytic server (e.g., an analytic server that has been specifically designated to process non-GTP traffic) (step (5)).


On the other hand, if the packet is a GTP packet, ingress card 306 can forward the packet to whitelist card 308 (step (6)). At steps (7) and (8), whitelist card 308 can attempt to match the inner IP addresses (e.g., source and/or destination IP addresses) of the GTP packet against a “whitelist” table. The whitelist table, which may be defined by the network operator, comprises entries identifying certain types of GTP traffic that the network operator does not want to be sent to analytic servers 210 for processing. For example, the network operator may consider such traffic to be innocuous or irrelevant to the analyses performed by analytic servers 210. If a match is made at step (8), then the GTP packet is immediately dropped (step (9)). Otherwise, the GTP packet is forwarded to an appropriate service instance port (GVSI port) of service card 310 (step (10)). Generally speaking, service card 310 can host one or more service instances, each of which is identified by a “GVSI port” and is responsible for processing some subset of the incoming GTP traffic from 3G network 206 and 4G/LTE network 208 (based on, e.g., GGSN/SGW). In a particular embodiment, service card 310 can host a separate service instance (and GVSI port) for each hardware packet processor implemented on service card 310.
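
The ingress and whitelist stages of this pipeline (steps (1) through (10)) can be summarized with the following Python sketch. It is a schematic model only: packets are dictionaries, the Gi table, whitelist, and GVSI dispatch map are plain Python structures standing in for the hardware tables on the line cards, and field names such as `inner_src_ip` and `gtp_gw_ip` are assumptions rather than actual implementation details.

```python
# Schematic model of the FIG. 3 ingress/whitelist stages (steps (1)-(10)).
# Tables are Python structures standing in for CAM/TCAM tables on the GVR.

def is_gtp(pkt: dict) -> bool:
    # GTP-C and GTP-U conventionally run over UDP ports 2123 and 2152.
    return pkt.get("udp_dst_port") in (2123, 2152)

def ingress_stage(pkt, gi_table, whitelist, gw_to_gvsi):
    pkt.pop("mpls_headers", None)                  # step (2): strip MPLS headers
    if not is_gtp(pkt):                            # step (3): GTP or not?
        gvap = gi_table.get(pkt["dst_ip"], "gvap-default")
        return ("forward", gvap)                   # steps (4)-(5): Gi table lookup
    key = (pkt["inner_src_ip"], pkt["inner_dst_ip"])
    if key in whitelist:                           # steps (7)-(8): whitelist match
        return ("drop", None)                      # step (9): drop uninteresting traffic
    return ("to_service", gw_to_gvsi[pkt["gtp_gw_ip"]])   # step (10): GVSI dispatch

# Example usage with illustrative table contents.
gi_table = {"198.51.100.9": "gvap-3"}
whitelist = {("10.0.0.5", "10.0.0.9")}
gw_to_gvsi = {"192.0.2.1": "gvsi-1"}
pkt = {"udp_dst_port": 2152, "dst_ip": "192.0.2.1", "inner_src_ip": "10.1.2.3",
       "inner_dst_ip": "10.9.8.7", "gtp_gw_ip": "192.0.2.1"}
print(ingress_stage(pkt, gi_table, whitelist, gw_to_gvsi))   # ('to_service', 'gvsi-1')
```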


At steps (11) and (12), service card 310 can receive the GTP packet on the GVSI port and can attempt to match the packet against a “GCL” table defined for the service instance. The GCL table can include forwarding entries that have been dynamically created by GCC 304 for ensuring that GTP packets belonging to the same user session are all forwarded to the same analytic server (this is the correlation concept described in the Background section). The GCL table can also include default forwarding entries. If a match is made at step (12) with a dynamic GCL entry, service card 310 can forward the GTP packet to a GVAP port 316 based on the dynamic entry (step (13)). On the other hand, if no match is made with a dynamic entry, service card 310 can forward the GTP packet to a GVAP port 316 based on a default GCL entry (step (14)). For example, the default rule or entry may specify that the packet should be forwarded to a GVAP port that is statically mapped to a GGSN or SGW IP address associated with the packet.
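
A minimal sketch of this two-level lookup follows, assuming dynamic GCL entries are keyed by tunnel (TEID) and default entries by GGSN/SGW IP address; both key choices are illustrative assumptions, not the documented table format.

```python
# Sketch of the GCL lookup in steps (11)-(14): dynamic entries (programmed
# per user session by the GCC) take precedence; otherwise the packet falls
# back to a default entry keyed by the GGSN/SGW IP address.

def gcl_lookup(pkt, dynamic_gcl, default_gcl):
    gvap = dynamic_gcl.get(pkt["teid"])          # step (12): dynamic match?
    if gvap is not None:
        return gvap                              # step (13): session-pinned GVAP
    return default_gcl[pkt["ggsn_ip"]]           # step (14): static default entry

dynamic_gcl = {0x1A2B: "gvap-7"}                 # programmed at runtime by the GCC
default_gcl = {"192.0.2.1": "gvap-2"}            # static GGSN/SGW -> GVAP mapping
pkt = {"teid": 0x1A2B, "ggsn_ip": "192.0.2.1"}
print(gcl_lookup(pkt, dynamic_gcl, default_gcl))   # -> gvap-7
```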


In addition to performing the GCL matching at step (12), service card 310 can also determine whether the GTP packet is a GTP-C packet and, if so, can transmit a copy of the packet to GCC 304 (step (15)). Alternatively, this transmission can be performed by whitelist card 308 (instead of service card 310). In a particular embodiment, service card 310 or whitelist card 308 can perform this transmission via a separate mirror port, or “GVMP,” 318 that is configured on GVR 302 and connected to GCC 304. Upon receiving the copy of the GTP-C packet, GCC 304 can parse the packet and determine whether the GTP traffic for the user session associated with the current GTP-C packet will still be sent to the same GVAP port or not (step (16)). As mentioned previously, in cases where a user roams, the SGSN source IP addresses for GTP packets in a user session may change, potentially leading to a bifurcation of that traffic to two or more GVAP ports (and thus, two or more different analytic servers). If the GVAP port has changed, GCC 304 can determine a new dynamic GCL entry that ensures all of the GTP traffic for the current user session is sent to the same GVAP port. GCC 304 can then cause this new dynamic GCL entry to be programmed into the dynamic GCL table of service card 310 (step (17)). Thus, all subsequent GTP traffic for the same user session will be forwarded based on this new entry at steps (11)-(13).
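
The correlation behavior attributed to GCC 304 can be sketched as follows. The session identifier, the SGSN-to-GVAP mapping, and the way dynamic rules are recorded are all simplified assumptions; the real GCC parses full GTP-C messages and programs the GVR's hardware tables.

```python
# Illustrative sketch of the GCC correlation logic in steps (15)-(17).

class GtpCorrelator:
    def __init__(self, default_gvap_for):
        self.default_gvap_for = default_gvap_for   # e.g., static SGSN -> GVAP map
        self.session_gvap = {}                     # session id -> pinned GVAP
        self.dynamic_rules = []                    # dynamic GCL entries pushed to the GVR

    def on_gtp_c(self, session_id, sgsn_ip):
        """Called for each mirrored GTP-C packet (received via the GVMP port)."""
        target = self.default_gvap_for(sgsn_ip)
        pinned = self.session_gvap.setdefault(session_id, target)
        if target != pinned:
            # The session would be bifurcated across analytic servers;
            # program a dynamic GCL entry so it stays on the pinned GVAP.
            self.dynamic_rules.append((session_id, sgsn_ip, pinned))
        return pinned

# Example: the user roams from an SGSN mapped to gvap-1 to one mapped to gvap-2.
gcc = GtpCorrelator(lambda ip: "gvap-1" if ip.startswith("10.1.") else "gvap-2")
gcc.on_gtp_c("imsi-001/session-9", "10.1.0.5")     # pinned to gvap-1
gcc.on_gtp_c("imsi-001/session-9", "10.2.0.8")     # roamed: dynamic rule added
print(gcc.dynamic_rules)   # [('imsi-001/session-9', '10.2.0.8', 'gvap-1')]
```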


3. User-Defined Tagging


As mentioned previously, once analytic servers 210 have received the GTP packets forwarded by network visibility system 300, the analytic servers can analyze the packets for various purposes (e.g., network management, reporting, security, etc.). As part of this process, analytic servers 210 may find it useful to categorize the received packets according to one or more criteria. For example, in one embodiment, analytic servers 210 may wish to categorize the packets based on the network circuit that they originated from in networks 206/208. Unfortunately, this categorization task can place a significant processing burden on servers 210, and may require access to information, such as network flow characteristics, that is only available to the components of network visibility system 300.


To address this, GVR 302 can implement a modified runtime workflow that supports user-defined packet tagging. This modified workflow is depicted in FIG. 4 according to an embodiment. With the workflow shown in FIG. 4, GVR 302 can tag the packets received from 3G and 4G/LTE networks 206 and 208 with “zone identifiers” as the packets are being passed through the GVR forwarding pipeline. These zone identifiers, which can be defined by a user (e.g., a network operator or administrator), can correspond to packet categorizations, or classes, that are known to analytic servers 210. GVR 302 can then forward the tagged packets to analytic servers 210. Upon receiving a tagged packet from GVR 302, each analytic server can extract the zone identifier from the packet and leverage the zone identifier (if needed) for its analytic processing. For instance, in embodiments where the zone identifiers identify the circuit of origin of each packet, the analytic servers can use the zone identifiers to correlate problematic packets/packet flows with their associated network circuits. In this way, the analytic servers can take appropriate steps to address any network problems that may exist with those circuits.


Steps (1)-(5) in the runtime workflow of FIG. 4 are substantially similar to the runtime workflow of FIG. 3. At step (6) of FIG. 4, if ingress card 306 determines that the received packet is a GTP packet, ingress card 306 can attempt to match the packet against a “zoning table” that is resident on ingress card 306, rather than sending the packet directly to whitelist card 308. This zoning table (which may be implemented using, e.g., a TCAM on ingress card 306) can comprise entries corresponding to “zoning” rules. Each zoning entry/rule in the zoning table can include one or more match parameters that are used to match the entry/rule against incoming packets, as well as a zone identifier to be added (i.e., tagged) to a matched packet. Thus, the match parameters represent the criteria that a given packet must satisfy in order to be tagged with the corresponding zone identifier. The zone identifiers represent different packet categorizations that are understood by analytic servers 210, such as circuit of origin, application type, quality of service, etc.


Generally speaking, the match parameters and zone identifiers for each entry/rule in the zoning table will be user-defined (by, e.g., a network operator or administrator) and will differ depending on the particular categorization task that needs to be performed by GVR 302. For example, in a scenario where the network operator desires incoming packets to be categorized based on their circuit of origin in source network 206 or 208, the network operator can define a plurality of zoning entries/rules where the match parameters for each entry/rule include (1) an ingress port of GVR 302 on which a packet is received, and (2) a source IP address prefix or a destination IP address prefix. In this example, the source IP address prefix can identify a range of GGSNs or SGWs from which the packet originated, and the destination IP address prefix can identify a range of GGSNs or SGWs to which the packet is directed. By matching against these parameters, GVR 302 can classify incoming packets based on the portion of network 206/208 from which the packet was tapped, as well as the direction of travel of the packet (i.e., whether the packet was travelling upstream towards the GGSN/SGW, or downstream away from the GGSN/SGW). GVR 302 can then tag each matched packet with a unique zone identifier associated with that network portion and travel direction (per the zoning entry/rule). In one embodiment, GVR 302 can include the zone identifier in an inner VLAN ID field of the packet. In other embodiments, GVR 302 can include the zone identifier in any other portion of the packet that will be accessible by analytic servers 210.
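
The following Python sketch illustrates the zoning match and tag of step (6) under the circuit-of-origin example above. The `ZoningEntry` fields, the first-match semantics over an ordered rule list, and the `inner_vlan_id` packet field are illustrative assumptions standing in for the TCAM-based zoning table and the actual GTP header manipulation.

```python
# Sketch of the zoning match/tag step (6) of FIG. 4, assuming a simple
# packet dictionary and first-match semantics over an ordered rule list.
import ipaddress
from dataclasses import dataclass
from typing import Optional

@dataclass
class ZoningEntry:
    ingress_port: str
    src_prefix: Optional[str]     # e.g., a GGSN/SGW range: packet is downstream
    dst_prefix: Optional[str]     # e.g., a GGSN/SGW range: packet is upstream
    zone_id: int                  # user-defined zone identifier (categorization)

    def matches(self, pkt) -> bool:
        if pkt["ingress_port"] != self.ingress_port:
            return False
        if self.src_prefix and ipaddress.ip_address(pkt["src_ip"]) not in \
                ipaddress.ip_network(self.src_prefix):
            return False
        if self.dst_prefix and ipaddress.ip_address(pkt["dst_ip"]) not in \
                ipaddress.ip_network(self.dst_prefix):
            return False
        return True

def tag_with_zone(pkt, zoning_table):
    for entry in zoning_table:               # first matching rule wins
        if entry.matches(pkt):
            pkt["inner_vlan_id"] = entry.zone_id
            break
    return pkt

# Example: circuit tapped on GVIP port 1/1; zone 100 = upstream toward the
# GGSN range 192.0.2.0/24, zone 101 = downstream from that same range.
zoning_table = [
    ZoningEntry("1/1", None, "192.0.2.0/24", zone_id=100),
    ZoningEntry("1/1", "192.0.2.0/24", None, zone_id=101),
]
pkt = {"ingress_port": "1/1", "src_ip": "10.5.6.7", "dst_ip": "192.0.2.44"}
print(tag_with_zone(pkt, zoning_table)["inner_vlan_id"])   # -> 100
```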


It should be noted that, in certain embodiments, the rules/entries in the zoning table can originate from user-defined configuration data that is maintained on GCC 304. These rules can be communicated from GCC 304 to GVR 302 (in the form of, e.g., a zoning access control list, or ACL) and programmed into the zoning table at the time GVR 302 and GCC 304 are initialized/booted up. Additional information regarding the format for this configuration data is provided in subsection 3.1 below.


In addition, each entry/rule in the zoning table can include other fields beyond the match parameters and zone identifier mentioned above. For example, in FIG. 4, each zoning entry/rule can further include a GVSI port that identifies the service instance of GVR 302 that should process the matched packet (i.e., determine which egress port, and thus analytic server, the packet should be forwarded to), as well as a whitelist port that identifies the ingress port of whitelist card 308. In this embodiment, GVR 302 can include the GVSI port in an outer VLAN ID field of the packet.


At step (7), once the packet has been matched with an entry/rule in the zoning table and tagged as described above, ingress card 306 can send the tagged packet to whitelist card 308 (based on, e.g., the whitelist port identified in the matched zoning entry/rule) and through the remaining portions of GVR 302's forwarding pipeline in accordance with steps (8)-(18), which are substantially similar to steps (6)-(17) of FIG. 3. At the end of this workflow, the tagged packet will be forwarded out one of the GVAP ports 316 to a particular analytic server 210 for analysis. Upon receiving the tagged packet, the analytic server can extract the zone identifier from the packet and use the zone identifier, as needed, for categorizing the packet as part of its analytic processing.


It should be appreciated that the workflow shown in FIG. 4 is illustrative and various modifications are possible. For example, although step (6) (i.e., matching the packet against the zoning table and tagging the packet) is shown as being performed by ingress card 306, in alternative embodiments this step can be performed by another line card on GVR 302, such as whitelist card 308, service card 310, egress card 312, or a standalone “zoning card” (not shown). For example, such a standalone card can be inserted in the path between whitelist card 308 and service card 310. By performing tagging in a standalone card that is positioned after whitelist card 308, dropped packets would no longer be tagged, thus reducing the processing performed by GVR 302 and resulting in an improvement in the performance of GVR 302. One of ordinary skill in the art will recognize other modifications, variations, and alternatives.



FIG. 5 depicts a high-level flowchart 500 of the tagging functionality attributed to GVR 302 in FIG. 4 according to one embodiment. Flowchart 500 assumes that the zoning table of the GVR has been programmed with appropriate zoning entries/rules as described with respect to FIG. 4.


Starting with block 502, GVR 302 can receive a data packet tapped from a source network (e.g., 3G network 206 or 4G/LTE network 208). The packet can be received via one of the ingress (GVIP) ports of the GVR.


At block 504, GVR 302 can attempt to match the data packet with a plurality of entries in a zoning table maintained on the GVR. As mentioned previously, each zoning entry can be user-defined and can include a plurality of match parameters and a zone identifier. As part of block 504, GVR 302 can attempt to match the match parameters in each zoning entry against corresponding fields in the received data packet.


If a match is made, GVR 302 can include the zone identifier from the matched zoning entry in the data packet (in other words, tag the packet with the zone identifier) (blocks 506 and 508). In embodiments where the packet is a GTP packet, GVR 302 can include the zone identifier in an inner VLAN ID field of the GTP packet. In other embodiments, GVR 302 can include the zone identifier in any other packet field. GVR 302 can then cause the tagged data packet to be forwarded to an appropriate analytic server for processing (block 510). Although not shown, once the analytic server has received the tagged data packet, the server can retrieve the zone identifier and apply it as part of its analytic workflows. The particular manner in which the analytic server uses the zone identifier will differ depending on the identifier's nature and purpose. For instance, in cases where the zone identifier identifies a physical network circuit from which the packet originated, the analytic server can leverage the zone identifier to troubleshoot problems in that network circuit.
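
On the receiving side, the analytic server only needs to read the tag. The sketch below assumes a server-side mapping from zone identifiers to circuit names and a simple packet representation; both are hypothetical and serve only to show how the extracted zone identifier might feed a per-circuit analysis.

```python
# Sketch of the analytic-server side of the workflow: extract the zone
# identifier that the GVR placed in the packet and use it to categorize
# the packet (here, counting dropped/delayed packets per circuit of origin).
from collections import Counter

ZONE_TO_CIRCUIT = {100: "metro-east/upstream", 101: "metro-east/downstream"}

def categorize(pkt, problem_counts: Counter):
    zone = pkt.get("inner_vlan_id")
    circuit = ZONE_TO_CIRCUIT.get(zone, "unclassified")
    if pkt.get("dropped") or pkt.get("delayed"):
        problem_counts[circuit] += 1          # attribute the issue to a circuit
    return circuit

problems = Counter()
categorize({"inner_vlan_id": 100, "delayed": True}, problems)
categorize({"inner_vlan_id": 101}, problems)
print(problems)    # Counter({'metro-east/upstream': 1})
```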


If no match is made at block 506, GVR 302 can simply forward the packet (without performing any tagging) to the analytic server per block 510 and flowchart 500 can end.


3.1 Zoning Entry/Rule Configuration Format


As noted previously, in certain embodiments the zoning entries/rules installed in the zoning table of GVR 302 can be generated from user-defined configuration data that is maintained on GCC 304. This user-defined configuration data can include, for each zoning entry/rule, a text string that defines the components of the rule. The following is an example format of such a text string for a zoning entry/rule according to an embodiment:


<IngressPort>, <SIP>, <SIPMask>, <DIP>, <DIPMask>, <Version>, <EgressVLAN>


As shown in this example, the text string includes an ingress port (IngressPort) field, a source IP address (SIP) field, a source IP Mask (SIPMask) field, a destination IP address (DIP) field, a destination IP Mask (DIPMask) field, a network version (Version) field, and a zone identifier (“EgressVLAN”) field. The IngressPort, SIP, SIPMask, DIP, and DIPMask fields correspond to match parameters for this zoning entry/rule. In particular, the IngressPort field identifies an ingress port of GVR 302 on which a data packet may be received, the SIP and SIPMask fields identify a source IP address (or address range/prefix) for the packet, and the DIP and DIPMask fields identify a destination IP address (or address range/prefix) for the packet. If the corresponding fields of an incoming packet match these fields, the packet is tagged with the zone identifier included in the EgressVLAN field. In some cases, wildcard values can be provided for certain match fields, such that any values in an incoming packet will match the wildcarded fields.


The Version field indicates the type of network traffic to which this zoning entry/rule is applicable. In a particular embodiment, the Version field can be set to “1” if the entry/rule is applicable to 3G networks, set to “0” if the entry/rule is applicable to 4G/LTE networks, and set to “2” if the entry/rule is applicable to both 3G and 4G/LTE networks.
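
A small parser for this configuration format might look as follows. The field order and the Version encoding follow the description above; the class name, wildcard convention ("*"), and other details are illustrative assumptions.

```python
# Sketch of parsing the zoning rule configuration string described above:
# <IngressPort>,<SIP>,<SIPMask>,<DIP>,<DIPMask>,<Version>,<EgressVLAN>
from dataclasses import dataclass

VERSION_NAMES = {"1": "3G", "0": "4G/LTE", "2": "3G+4G/LTE"}

@dataclass
class ZoningRule:
    ingress_port: str
    sip: str
    sip_mask: str
    dip: str
    dip_mask: str
    version: str        # "3G", "4G/LTE", or "3G+4G/LTE"
    zone_id: int        # EgressVLAN field, used as the zone identifier

def parse_zoning_rule(line: str) -> ZoningRule:
    port, sip, sip_mask, dip, dip_mask, version, vlan = \
        [f.strip() for f in line.split(",")]
    return ZoningRule(port, sip, sip_mask, dip, dip_mask,
                      VERSION_NAMES[version], int(vlan))

# Example: a 3G rule matching GVIP port 1/1 and source prefix 192.0.2.0/24,
# with wildcarded destination fields, assigning zone identifier 100.
rule = parse_zoning_rule("1/1, 192.0.2.0, 255.255.255.0, *, *, 1, 100")
print(rule.version, rule.zone_id)    # 3G 100
```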


It should be appreciated that the zoning rule configuration format described above is illustrative and may vary depending on the nature of the packet categorization being performed via these entries/rules.


4. Network Switch



FIG. 6 depicts an exemplary network switch 600 according to an embodiment. Network switch 600 can be used to implement, e.g., GVR 202/302 of FIGS. 2, 3, and 4.


As shown, network switch 600 includes a management module 602, a switch fabric module 604, and a number of I/O modules (i.e., line cards) 606(1)-606(N). Management module 602 includes one or more management CPUs 608 for managing/controlling the operation of the device. Each management CPU 608 can be a general purpose processor, such as a PowerPC, Intel, AMD, or ARM-based processor, that operates under the control of software stored in an associated memory (not shown).


Switch fabric module 604 and I/O modules 606(1)-606(N) collectively represent the data, or forwarding, plane of network switch 600. Switch fabric module 604 is configured to interconnect the various other modules of network switch 600. Each I/O module 606(1)-606(N) can include one or more input/output ports 610(1)-610(N) that are used by network switch 600 to send and receive data packets. Each I/O module 606(1)-606(N) can also include a packet processor 612(1)-612(N). Each packet processor 612(1)-612(N) is a hardware processing component (e.g., an FPGA or ASIC) that can make wire speed decisions on how to handle incoming or outgoing data packets. In a particular embodiment, I/O modules 606(1)-606(N) can be used to implement the various types of line cards described with respect to GVR 302 in FIGS. 3 and 4 (e.g., ingress card 306, whitelist card 308, service card 310, and egress card 312).


It should be appreciated that network switch 600 is illustrative and not intended to limit embodiments of the present invention. Many other configurations having more or fewer components than switch 600 are possible.


5. Computer System



FIG. 7 is a simplified block diagram of a computer system 700 according to an embodiment. Computer system 700 can be used to implement, e.g., GCC 204/304 of FIGS. 2, 3, and 4. As shown in FIG. 7, computer system 700 can include one or more processors 702 that communicate with a number of peripheral devices via a bus subsystem 704. These peripheral devices can include a storage subsystem 706 (comprising a memory subsystem 708 and a file storage subsystem 710), user interface input devices 712, user interface output devices 714, and a network interface subsystem 716.


Bus subsystem 704 can provide a mechanism for letting the various components and subsystems of computer system 700 communicate with each other as intended. Although bus subsystem 704 is shown schematically as a single bus, alternative embodiments of the bus subsystem can utilize multiple busses.


Network interface subsystem 716 can serve as an interface for communicating data between computer system 700 and other computing devices or networks. Embodiments of network interface subsystem 716 can include wired (e.g., coaxial, twisted pair, or fiber optic Ethernet) and/or wireless (e.g., Wi-Fi, cellular, Bluetooth, etc.) interfaces.


User interface input devices 712 can include a keyboard, pointing devices (e.g., mouse, trackball, touchpad, etc.), a scanner, a barcode scanner, a touch-screen incorporated into a display, audio input devices (e.g., voice recognition systems, microphones, etc.), and other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and mechanisms for inputting information into computer system 700.


User interface output devices 714 can include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices, etc. The display subsystem can be a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), or a projection device. In general, use of the term “output device” is intended to include all possible types of devices and mechanisms for outputting information from computer system 700.


Storage subsystem 706 can include a memory subsystem 708 and a file/disk storage subsystem 710. Subsystems 708 and 710 represent non-transitory computer-readable storage media that can store program code and/or data that provide the functionality of various embodiments described herein.


Memory subsystem 708 can include a number of memories including a main random access memory (RAM) 718 for storage of instructions and data during program execution and a read-only memory (ROM) 720 in which fixed instructions are stored. File storage subsystem 710 can provide persistent (i.e., non-volatile) storage for program and data files and can include a magnetic or solid-state hard disk drive, an optical drive along with associated removable media (e.g., CD-ROM, DVD, Blu-Ray, etc.), a removable flash memory-based drive or card, and/or other types of storage media known in the art.


It should be appreciated that computer system 700 is illustrative and not intended to limit embodiments of the present invention. Many other configurations having more or fewer components than computer system 700 are possible.


The above description illustrates various embodiments of the present invention along with examples of how aspects of the present invention may be implemented. The above examples and embodiments should not be deemed to be the only embodiments, and are presented to illustrate the flexibility and advantages of the present invention as defined by the following claims. For example, although GVR 202/302 and GCC 204/304 have generally been described as separate and distinct devices in network visibility system 200/300, in certain embodiments GVR 202/302 and GCC 204/304 can be implemented in the context of a single device. For instance, in one embodiment, GVR 202/302 and GCC 204/304 can be implemented as components in a single network switch/router (such as switch 600 of FIG. 6). In another embodiment, GVR 202/302 and GCC 204/304 can be implemented as components (e.g., virtual machines) within a single computer system (such as computer system 700 of FIG. 7). One of ordinary skill in the art will recognize many variations and modifications for the arrangement of network visibility system 200/300.


Further, although certain embodiments have been described with respect to particular process flows and steps, it should be apparent to those skilled in the art that the scope of the present invention is not strictly limited to the described flows and steps. Steps described as sequential may be executed in parallel, order of steps may be varied, and steps may be modified, combined, added, or omitted.


Yet further, although certain embodiments have been described using a particular combination of hardware and software, it should be recognized that other combinations of hardware and software are possible, and that specific operations described as being implemented in software can also be implemented in hardware and vice versa.


The specification and drawings are, accordingly, to be regarded in an illustrative rather than restrictive sense. Other arrangements, embodiments, implementations and equivalents will be evident to those skilled in the art and may be employed without departing from the spirit and scope of the invention as set forth in the following claims.

Claims
  • 1. A method comprising: receiving, by a data plane component of a network visibility system, a data packet tapped from a source network; matching, by the data plane component, the data packet with an entry in a rule table, the entry including one or more match parameters; in response to the matching, tagging, by the data plane component, the data packet with a zone identifier defined in the entry, the zone identifier being a user-defined identifier that is used by an analytic server for categorizing the data packet; and forwarding, by the data plane component, the data packet with the zone identifier to the analytic server for analysis.
  • 2. The method of claim 1 wherein the zone identifier identifies a physical circuit of the source network from which the data packet originated.
  • 3. The method of claim 2 wherein the zone identifier further identifies whether the data packet was traveling in an upstream or downstream direction on the physical circuit.
  • 4. The method of claim 1 wherein the data packet is a GPRS Tunneling Protocol (GTP) packet.
  • 5. The method of claim 4 wherein tagging the data packet with the zone identifier comprises adding the zone identifier to an inner VLAN ID field of the GTP packet.
  • 6. The method of claim 1 wherein the one or more match parameters include an ingress port of the data plane component.
  • 7. The method of claim 6 wherein the one or more match parameters further include a source IP address prefix or a destination IP address prefix.
  • 8. The method of claim 7 wherein the source network is a 3G or 4G/LTE wireless network, and wherein the source IP address prefix or destination IP address prefix corresponds to one or more gateway GPRS support nodes (GGSNs) or serving gateways (SGWs) in the source network.
  • 9. The method of claim 1 wherein the entry is generated by a control plane component of the network visibility system and is communicated to the data plane component upon initialization.
  • 10. The method of claim 9 wherein the entry is generated based on a user-defined configuration file that is maintained on the control plane component.
  • 11. The method of claim 1 wherein the data plane component is a network switch or router, and wherein the rule table is resident on an ingress line card of the network switch or router.
  • 12. A non-transitory computer readable storage medium having stored thereon program code executable by a data plane component of a network visibility system, the program code causing the data plane component to: receive a data packet tapped from a source network; match the data packet with an entry in a rule table, the entry including one or more match parameters; in response to the matching, tagging the data packet with a zone identifier defined in the entry, the zone identifier being a user-defined identifier that is used by an analytic server for categorizing the data packet; and forwarding the data packet with the zone identifier to the analytic server for analysis.
  • 13. The non-transitory computer readable storage medium of claim 12 wherein the zone identifier identifies a physical circuit of the source network from which the data packet originated.
  • 14. The non-transitory computer readable storage medium of claim 12 wherein the data packet is a GPRS Tunneling Protocol (GTP) packet, and wherein tagging the data packet with the zone identifier comprises adding the zone identifier to an inner VLAN ID field of the GTP packet.
  • 15. A device operable to act as a data plane component in a network visibility system, the device comprising: a processor; and a non-transitory computer readable medium having stored thereon program code that, when executed by the processor, causes the processor to: receive a data packet tapped from a source network; match the data packet with an entry in a rule table, the entry including one or more match parameters; in response to the matching, tagging the data packet with a zone identifier defined in the entry, the zone identifier being a user-defined identifier that is used by an analytic server for categorizing the data packet; and forwarding the data packet with the zone identifier to the analytic server for analysis.
  • 16. The device of claim 15 wherein the zone identifier identifies a physical circuit of the source network from which the data packet originated.
  • 17. The device of claim 15 wherein the data packet is a GPRS Tunneling Protocol (GTP) packet, and wherein tagging the data packet with the zone identifier comprises adding the zone identifier to an inner VLAN ID field of the GTP packet.
CROSS REFERENCES TO RELATED APPLICATIONS

The present application claims the benefit and priority under 35 U.S.C. 119(e) of U.S. Provisional Application No. 62/137,106, filed Mar. 23, 2015, entitled “TECHNIQUES FOR USER-DEFINED TAGGING OF TRAFFIC IN A NETWORK VISIBILITY SYSTEM.” In addition, the present application is related to the following commonly-owned U.S. patent applications: 1. U.S. application Ser. No. 14/603,304, filed Jan. 22, 2015, now U.S. Pat. No. 9,648,542, issued May 9, 2017, entitled “SESSION-BASED PACKET ROUTING FOR FACILITATING ANALYTICS”; 2. U.S. application Ser. No. 14/848,586, filed concurrently with the present application, entitled “TECHNIQUES FOR EXCHANGING CONTROL AND CONFIGURATION INFORMATION IN A NETWORK VISIBILITY SYSTEM”; and 3. U.S. application Ser. No. 14/848,645, filed concurrently with the present application, entitled “TECHNIQUES FOR EFFICIENTLY PROGRAMMING FORWARDING RULES IN A NETWORK SYSTEM.” The entire contents of the foregoing provisional and nonprovisional applications are incorporated herein by reference for all purposes.

US Referenced Citations (322)
Number Name Date Kind
5031094 Toegel et al. Jul 1991 A
5359593 Derby et al. Oct 1994 A
5948061 Merriman et al. Sep 1999 A
5951634 Sitbon et al. Sep 1999 A
6006269 Phaal Dec 1999 A
6006333 Nielsen Dec 1999 A
6078956 Bryant et al. Jun 2000 A
6092178 Jindal et al. Jul 2000 A
6112239 Kenner et al. Aug 2000 A
6115752 Chauhan Sep 2000 A
6128279 O'Neil et al. Oct 2000 A
6128642 Doraswamy et al. Oct 2000 A
6148410 Baskey et al. Nov 2000 A
6167445 Gai et al. Dec 2000 A
6167446 Lister et al. Dec 2000 A
6182139 Brendel Jan 2001 B1
6195691 Brown Feb 2001 B1
6205477 Johnson et al. Mar 2001 B1
6233604 Van Horne et al. May 2001 B1
6260070 Shah Jul 2001 B1
6286039 Van Horne et al. Sep 2001 B1
6286047 Ramanathan et al. Sep 2001 B1
6304913 Rune Oct 2001 B1
6324580 Jindal et al. Nov 2001 B1
6327622 Jindal et al. Dec 2001 B1
6336137 Lee et al. Jan 2002 B1
6381627 Kwan et al. Apr 2002 B1
6389462 Cohen et al. May 2002 B1
6427170 Sitaraman et al. Jul 2002 B1
6434118 Kirschenbaum Aug 2002 B1
6438652 Jordan et al. Aug 2002 B1
6446121 Shah et al. Sep 2002 B1
6449657 Stanbach, Jr. et al. Sep 2002 B2
6470389 Chung et al. Oct 2002 B1
6473802 Masters Oct 2002 B2
6480508 Mwikalo et al. Nov 2002 B1
6490624 Sampson et al. Dec 2002 B1
6549944 Weinberg et al. Apr 2003 B1
6567377 Vepa et al. May 2003 B1
6578066 Logan et al. Jun 2003 B1
6606643 Emens et al. Aug 2003 B1
6665702 Zisapel et al. Dec 2003 B1
6671275 Wong et al. Dec 2003 B1
6681232 Sistanizadeh et al. Jan 2004 B1
6681323 Fontanesi et al. Jan 2004 B1
6691165 Bruck et al. Feb 2004 B1
6697368 Chang et al. Feb 2004 B2
6735218 Chang et al. May 2004 B2
6745241 French et al. Jun 2004 B1
6751616 Chan Jun 2004 B1
6754706 Swildens et al. Jun 2004 B1
6772211 Lu et al. Aug 2004 B2
6779017 Lamberton et al. Aug 2004 B1
6789125 Aviani et al. Sep 2004 B1
6821891 Chen et al. Nov 2004 B2
6826198 Turina et al. Nov 2004 B2
6831891 Mansharamani et al. Dec 2004 B2
6839700 Doyle et al. Jan 2005 B2
6850984 Kalkunte et al. Feb 2005 B1
6874152 Vermeire et al. Mar 2005 B2
6879995 Chinta et al. Apr 2005 B1
6898633 Lyndersay et al. May 2005 B1
6901072 Wong May 2005 B1
6901081 Ludwig May 2005 B1
6920498 Gourlay et al. Jul 2005 B1
6928485 Krishnamurthy et al. Aug 2005 B1
6944678 Lu et al. Sep 2005 B2
6963914 Breitbart et al. Nov 2005 B1
6963917 Callis et al. Nov 2005 B1
6985956 Luke et al. Jan 2006 B2
6987763 Rochberger et al. Jan 2006 B2
6996615 McGuire Feb 2006 B1
6996616 Leighton et al. Feb 2006 B1
7000007 Valenti Feb 2006 B1
7009086 Brown et al. Mar 2006 B2
7009968 Ambe et al. Mar 2006 B2
7020698 Andrews et al. Mar 2006 B2
7020714 Kalyanaraman et al. Mar 2006 B2
7028083 Levine et al. Apr 2006 B2
7031304 Arberg et al. Apr 2006 B1
7032010 Swildens et al. Apr 2006 B1
7036039 Holland Apr 2006 B2
7058706 Iyer et al. Jun 2006 B1
7058717 Chao et al. Jun 2006 B2
7062642 Langrind et al. Jun 2006 B1
7086061 Joshi et al. Aug 2006 B1
7089293 Grosner et al. Aug 2006 B2
7095738 Desanti Aug 2006 B1
7117530 Lin Oct 2006 B1
7126910 Sridhar Oct 2006 B1
7127713 Davis et al. Oct 2006 B2
7136932 Schneider Nov 2006 B1
7139242 Bays Nov 2006 B2
7177933 Foth Feb 2007 B2
7177943 Temoshenko Feb 2007 B1
7185052 Day Feb 2007 B2
7187687 Davis et al. Mar 2007 B1
7188189 Karol et al. Mar 2007 B2
7197547 Miller et al. Mar 2007 B1
7206806 Pineau Apr 2007 B2
7215637 Ferguson et al. May 2007 B1
7225272 Kelley et al. May 2007 B2
7240015 Karmouch et al. Jul 2007 B1
7240100 Wein et al. Jul 2007 B1
7254626 Kommula et al. Aug 2007 B1
7257642 Bridger et al. Aug 2007 B1
7260645 Bays Aug 2007 B2
7266117 Davis Sep 2007 B1
7266120 Cheng et al. Sep 2007 B2
7277954 Stewart et al. Oct 2007 B1
7292573 LaVigne et al. Nov 2007 B2
7296088 Padmanabhan et al. Nov 2007 B1
7321926 Zhang et al. Jan 2008 B1
7424018 Gallatin et al. Sep 2008 B2
7436832 Gallatin et al. Oct 2008 B2
7440467 Gallatin et al. Oct 2008 B2
7441045 Skene et al. Oct 2008 B2
7450527 Ashwood Smith Nov 2008 B2
7454500 Hsu et al. Nov 2008 B1
7483374 Nilakantan et al. Jan 2009 B2
7492713 Turner et al. Feb 2009 B1
7506065 LaVigne et al. Mar 2009 B2
7539134 Bowes May 2009 B1
7555562 See et al. Jun 2009 B2
7558195 Kuo et al. Jul 2009 B1
7574508 Kommula Aug 2009 B1
7581009 Hsu et al. Aug 2009 B1
7584301 Joshi Sep 2009 B1
7587487 Gunturu Sep 2009 B1
7606203 Shabtay et al. Oct 2009 B1
7647427 Devarapalli Jan 2010 B1
7657629 Kommula Feb 2010 B1
7690040 Frattura et al. Mar 2010 B2
7706363 Daniel et al. Apr 2010 B1
7716370 Devarapalli May 2010 B1
7720066 Weyman et al. May 2010 B2
7720076 Dobbins et al. May 2010 B2
7746789 Katoh et al. Jun 2010 B2
7747737 Apte et al. Jun 2010 B1
7756965 Joshi Jul 2010 B2
7774833 Szeto et al. Aug 2010 B1
7787454 Won et al. Aug 2010 B1
7792047 Gallatin et al. Sep 2010 B2
7835348 Kasralikar Nov 2010 B2
7835358 Gallatin et al. Nov 2010 B2
7840678 Joshi Nov 2010 B2
7848326 Leong et al. Dec 2010 B1
7889748 Leong et al. Feb 2011 B1
7899899 Joshi Mar 2011 B2
7940766 Olakangil et al. May 2011 B2
7953089 Ramakrishnan et al. May 2011 B1
8018943 Pleshek et al. Sep 2011 B1
8208494 Leong Jun 2012 B2
8238344 Chen et al. Aug 2012 B1
8239960 Frattura et al. Aug 2012 B2
8248928 Wang et al. Aug 2012 B1
8270845 Cheung et al. Sep 2012 B2
8315256 Leong et al. Nov 2012 B2
8386846 Cheung Feb 2013 B2
8391286 Gallatin et al. Mar 2013 B2
8504721 Hsu et al. Aug 2013 B2
8514718 Zijst Aug 2013 B2
8537697 Leong et al. Sep 2013 B2
8570862 Leong et al. Oct 2013 B1
8615008 Natarajan et al. Dec 2013 B2
8654651 Leong et al. Feb 2014 B2
8824466 Won et al. Sep 2014 B2
8830819 Leong et al. Sep 2014 B2
8873557 Nguyen Oct 2014 B2
8891527 Wang Nov 2014 B2
8897138 Yu et al. Nov 2014 B2
8953458 Leong et al. Feb 2015 B2
9155075 Song et al. Oct 2015 B2
9264446 Goldfarb et al. Feb 2016 B2
9270566 Wang et al. Feb 2016 B2
9270592 Sites Feb 2016 B1
9294367 Natarajan et al. Mar 2016 B2
9356866 Sivaramakrishnan May 2016 B1
9380002 Johansson et al. Jun 2016 B2
9479415 Natarajan et al. Oct 2016 B2
9565138 Chen et al. Feb 2017 B2
9648542 Hsu et al. May 2017 B2
20010049741 Skene et al. Dec 2001 A1
20010052016 Skene et al. Dec 2001 A1
20020009081 Sampath Jan 2002 A1
20020018796 Wironen Feb 2002 A1
20020023089 Woo Feb 2002 A1
20020026551 Kamimaki et al. Feb 2002 A1
20020038360 Andrews et al. Mar 2002 A1
20020055939 Nardone et al. May 2002 A1
20020059170 Vange May 2002 A1
20020059464 Hata et al. May 2002 A1
20020062372 Hong et al. May 2002 A1
20020078233 Biliris et al. Jun 2002 A1
20020091840 Pulier et al. Jul 2002 A1
20020112036 Bohannon et al. Aug 2002 A1
20020120743 Shabtay et al. Aug 2002 A1
20020124096 Loguinov et al. Sep 2002 A1
20020133601 Kennamer et al. Sep 2002 A1
20020150048 Ha et al. Oct 2002 A1
20020154600 Ido et al. Oct 2002 A1
20020188862 Trethewey et al. Dec 2002 A1
20020194324 Guha Dec 2002 A1
20020194335 Maynard Dec 2002 A1
20030023744 Sadot et al. Jan 2003 A1
20030031185 Kikuchi et al. Feb 2003 A1
20030035430 Islam et al. Feb 2003 A1
20030065711 Acharya et al. Apr 2003 A1
20030065763 Swildens et al. Apr 2003 A1
20030105797 Dolev et al. Jun 2003 A1
20030115283 Barbir et al. Jun 2003 A1
20030135509 Davis et al. Jul 2003 A1
20030202511 Sreejith et al. Oct 2003 A1
20030210686 Terrell et al. Nov 2003 A1
20030210694 Jayaraman et al. Nov 2003 A1
20030229697 Borella Dec 2003 A1
20040019680 Chao et al. Jan 2004 A1
20040024872 Kelley et al. Feb 2004 A1
20040032868 Oda et al. Feb 2004 A1
20040064577 Dahlin et al. Apr 2004 A1
20040194102 Neerdaels Sep 2004 A1
20040243718 Fujiyoshi Dec 2004 A1
20040249939 Amini et al. Dec 2004 A1
20040249971 Klinker Dec 2004 A1
20050021883 Shishizuka et al. Jan 2005 A1
20050033858 Swildens et al. Feb 2005 A1
20050060418 Sorokopud Mar 2005 A1
20050060427 Phillips et al. Mar 2005 A1
20050086295 Cunningham et al. Apr 2005 A1
20050149531 Srivastava Jul 2005 A1
20050169180 Ludwig Aug 2005 A1
20050190695 Phaal Sep 2005 A1
20050207417 Ogawa et al. Sep 2005 A1
20050278565 Frattura et al. Dec 2005 A1
20050286416 Shimonishi et al. Dec 2005 A1
20060036743 Deng et al. Feb 2006 A1
20060039374 Belz et al. Feb 2006 A1
20060045082 Fertell et al. Mar 2006 A1
20060143300 See et al. Jun 2006 A1
20070044141 Lor Feb 2007 A1
20070053296 Yazaki et al. Mar 2007 A1
20070171918 Ota Jul 2007 A1
20070195761 Tatar et al. Aug 2007 A1
20070233891 Luby et al. Oct 2007 A1
20080002591 Ueno Jan 2008 A1
20080028077 Kamata Jan 2008 A1
20080031141 Lean et al. Feb 2008 A1
20080089336 Mercier et al. Apr 2008 A1
20080137660 Olakangil et al. Jun 2008 A1
20080159141 Soukup et al. Jul 2008 A1
20080181119 Beyers Jul 2008 A1
20080195731 Harmel et al. Aug 2008 A1
20080225710 Raja et al. Sep 2008 A1
20080304423 Chuang et al. Dec 2008 A1
20090135835 Gallatin et al. May 2009 A1
20090240644 Boettcher et al. Sep 2009 A1
20090262745 Leong et al. Oct 2009 A1
20100011126 Hsu et al. Jan 2010 A1
20100135323 Leong Jun 2010 A1
20100209047 Cheung et al. Aug 2010 A1
20100228974 Watts Sep 2010 A1
20100293296 Hsu et al. Nov 2010 A1
20100325178 Won et al. Dec 2010 A1
20110044349 Gallatin et al. Feb 2011 A1
20110058566 Leong et al. Mar 2011 A1
20110211443 Leong et al. Sep 2011 A1
20110216771 Gallatin et al. Sep 2011 A1
20120023340 Cheung Jan 2012 A1
20120103518 Kakimoto et al. May 2012 A1
20120157088 Gerber et al. Jun 2012 A1
20120201137 Le Faucheur et al. Aug 2012 A1
20120243533 Leong Sep 2012 A1
20120257635 Gallatin et al. Oct 2012 A1
20120275311 Ivershen Nov 2012 A1
20130010613 Cafarelli et al. Jan 2013 A1
20130028072 Addanki Jan 2013 A1
20130034107 Leong et al. Feb 2013 A1
20130156029 Gallatin et al. Jun 2013 A1
20130173784 Wang et al. Jul 2013 A1
20130201984 Wang Aug 2013 A1
20130259037 Natarajan et al. Oct 2013 A1
20130272135 Leong Oct 2013 A1
20140016500 Leong et al. Jan 2014 A1
20140022916 Natarajan et al. Jan 2014 A1
20140029451 Nguyen Jan 2014 A1
20140040478 Hsu et al. Feb 2014 A1
20140101297 Neisinger et al. Apr 2014 A1
20140204747 Yu et al. Jul 2014 A1
20140219100 Pandey et al. Aug 2014 A1
20140233399 Mann Aug 2014 A1
20140321278 Cafarelli et al. Oct 2014 A1
20150009828 Murakami Jan 2015 A1
20150009830 Bisht Jan 2015 A1
20150033169 Lection et al. Jan 2015 A1
20150103824 Tanabe Apr 2015 A1
20150142935 Srinivas et al. May 2015 A1
20150170920 Purayath et al. Jun 2015 A1
20150180802 Chen et al. Jun 2015 A1
20150195192 Vasseur et al. Jul 2015 A1
20150207905 Merchant Jul 2015 A1
20150215841 Hsu et al. Jul 2015 A1
20150256436 Stoyanov et al. Sep 2015 A1
20150263889 Newton Sep 2015 A1
20150281125 Koponen Oct 2015 A1
20150319070 Nachum Nov 2015 A1
20160119234 Valencia Lopez Apr 2016 A1
20160149811 Roch May 2016 A1
20160164768 Natarajan et al. Jun 2016 A1
20160182329 Armolavicius et al. Jun 2016 A1
20160182378 Basavaraja et al. Jun 2016 A1
20160204996 Lindgren et al. Jul 2016 A1
20160248655 Francisco et al. Aug 2016 A1
20160285735 Chen et al. Sep 2016 A1
20160285762 Chen et al. Sep 2016 A1
20160308766 Register et al. Oct 2016 A1
20160373303 Vedam Dec 2016 A1
20160373304 Sharma Dec 2016 A1
20160373351 Sharma et al. Dec 2016 A1
20160373352 Sharma et al. Dec 2016 A1
20170187649 Chen et al. Jun 2017 A1
20170237632 Hegde et al. Aug 2017 A1
20170237633 Hegde et al. Aug 2017 A1
Foreign Referenced Citations (11)
Number Date Country
101677292 Mar 2010 CN
2654340 Oct 2013 EP
3206344 Aug 2017 EP
3206345 Aug 2017 EP
20070438 Feb 2008 IE
201641010295 Mar 2016 IN
201641016960 May 2016 IN
201641035761 Oct 2016 IN
2015138513 Sep 2015 NO
2010135474 Nov 2010 WO
2015116538 Aug 2015 WO
Non-Patent Literature Citations (137)
Entry
Notice of Allowance for U.S. Appl. No. 14/030,782 dated Nov. 16, 2015, 20 pages.
U.S. Appl. No. 14/848,586, filed Sep. 9, 2015 by Chen et al.
U.S. Appl. No. 14/848,645, filed Sep. 9, 2015 by Chen et al.
U.S. Appl. No. 60/169,502, filed Dec. 7, 2009 by Yeejang James Lin.
U.S. Appl. No. 60/182,812, filed Feb. 16, 2000 by Skene et al.
U.S. Appl. No. 09/459,815, filed Dec. 13, 1999 by Skene et al.
Notice of Allowance for U.S. Appl. No. 13/584,534 dated Dec. 16, 2015, 7 pages.
Delgadillo, “Cisco Distributed Director”, White Paper, 1999, at URL:http://www-europe.cisco.warp/public/751/distdir/dd—wp.htm, (19 pages) with Table of Contents for TeleCon (16 pages).
Cisco LocalDirector Version 1.6.3 Release Notes, Oct. 1997, Cisco Systems, Inc. Doc No. 78-3880-05.
“Foundry Networks Announces Application Aware Layer 7 Switching on ServerIron Platform,” (Mar. 1999).
Foundry ServerIron Installation and Configuration Guide (May 2000), Table of Contents—Chapter 1-5, http://web.archive.org/web/20000815085849/http://www.foundrynetworks.com/techdocs/SI/index.html.
Foundry ServerIron Installation and Configuration Guide (May 2000), Chapter 6-10, http://web.archive.org/web/20000815085849/http://www.foundrynetworks.com/techdocs/SI/index.html.
Foundry ServerIron Installation and Configuration Guide (May 2000), Chapter 11-Appendix C, http://web.archive.org/web/20000815085849/http://www.foundrynetworks.com/techdocs/SI/index.html.
Non-Final Office Action for U.S. Appl. No. 14/320,138 dated Feb. 2, 2016, 30 pages.
U.S. Appl. No. 14/927,478, filed Oct. 30, 2015 by Vedam et al.
U.S. Appl. No. 14/927,479, filed Oct. 30, 2015 by Sharma et al.
U.S. Appl. No. 14/927,482, filed Oct. 30, 2015 by Sharma et al.
U.S. Appl. No. 14/927,484, filed Oct. 30, 2015 by Sharma et al.
NGenius Subscriber Intelligence, http://www.netscout.com/uploads/2015/03/NetScout_DS_Subscriber_Intelligence_SP.pdf, downloaded circa Mar. 23, 2015, pp. 1-6.
Xu et al.: Cellular Data Network Infrastructure Characterization and Implication on Mobile Content Placement, Sigmetrics '11 Proceedings of the ACM SIGMETRICS joint international conference on Measurement and modeling of computer systems, date Jun. 7-11, 2011, pp. 1-12, ISBN: 978-1-4503-0814-4 ACM New York, NY, USA copyright 2011.
E.H.T.B. Brands, Flow-Based Monitoring of GTP Traffic in Cellular Networks, Jul. 20, 2012, pp. 1-64, University of Twente, Enschede, The Netherlands.
Qosmos DeepFlow: Subscriber Analytics Use Case, http://www.qosmos.com/wp-content/uploads/2014/01/Qosmos-DeepFlow-Analytics-use-case-datasheet-Jan-2014.pdf, date Jan. 2014, pp. 1-2.
Configuring GTM to determine packet gateway health and availability, https://support.f5.com/kb/en-us/products/big-ip_gtm/manuals/product/gtm-implementations-11-6-0/9.html, downloaded circa Mar. 23, 2015, pp. 1-5.
ExtraHop-Arista Persistent Monitoring Architecture for SDN, downloaded circa Apr. 2, 2015, pp. 1-5.
7433 GTP Session Controller, www.ixia.com, downloaded circa Apr. 2, 2015, pp. 1-3.
Stateful GTP Correlation, https://www.gigamon.com/PDF/appnote/AN-GTP-Correlation-Stateful-Subscriber-Aware-Filtering-4025.pdf, date 2013, pp. 1-9.
GigaVUE-2404 // Data Sheet, www.gigamon.com, date Feb. 2014, pp. 1-6.
NGenius Performance Manager, www.netscout.com, date Mar. 2014, pp. 1-8.
GigaVUE-VM // Data Sheet, www.gigamon.com, date Oct. 2014, pp. 1-3.
Unified Visibility Fabric: An Innovative Approach, https://www.gigamon.com/unified-visibility-fabric, downloaded circa Mar. 30, 2015, pp. 1-4.
adaptiv.io and Apsalar Form Strategic Partnership to Provide Omni-channel Mobile Data Intelligence, http://www.businesswire.com/news/home/20150113005721/en/adaptiv.io-Apsalar-Form-Strategic-Partnership-Provide-Omni-channel, Downloaded circa Mar. 30, 2015, pp. 1-2.
Real-time Data Analytics with IBM InfoSphere Streams and Brocade MLXe Series Devices, www.brocade.com, date 2011, pp. 1-2.
Syniverse Proactive Roaming Data Analysis-VisProactive, http://m.syniverse.com/files/service_solutions/pdf/solutionsheet_visproactive_314.pdf, date 2014, pp. 1-3.
Network Analytics: Product Overview, www.sandvine.com, date Apr. 28, 2014, pp. 1-2.
Non-Final Office Action for U.S. Appl. No. 15/043,421 dated Apr. 13, 2016, 18 pages.
U.S. Appl. No. 61/919,244, filed Dec. 20, 2013 by Chen et al.
U.S. Appl. No. 61/932,650, filed Jan. 28, 2014 by Munshi et al.
U.S. Appl. No. 61/994,693, filed May 16, 2014 by Munshi et al.
U.S. Appl. No. 62/088,434, filed Dec. 5, 2014 by Hsu et al.
U.S. Appl. No. 62/137,073, filed Mar. 23, 2015 by Chen et al.
U.S. Appl. No. 62/137,084, filed Mar. 23, 2015 by Chen et al.
U.S. Appl. No. 62/137,096, filed Mar. 23, 2015 by Laxman et al.
U.S. Appl. No. 62/137,106, filed Mar. 23, 2015 by Laxman et al.
PCT Patent Application No. PCT/US2015/012915 filed on Jan. 26, 2015 by Hsu et al.
U.S. Appl. No. 14/320,138, filed Jun. 30, 2014 by Chen et al.
Non-Final Office Action for U.S. Appl. No. 11/827,524 dated Dec. 10, 2009, 15 pages.
Non-Final Office Action for U.S. Appl. No. 11/827,524 dated Jun. 2, 2010, 14 pages.
Non-Final Office Action for U.S. Appl. No. 11/827,524 dated Nov. 26, 2010, 16 pages.
Final Office Action for U.S. Appl. No. 11/827,524 dated May 6, 2011, 19 pages.
Advisory Action for U.S. Appl. No. 11/827,524 dated Jul. 14, 2011, 5 pages.
Non-Final Office Action for U.S. Appl. No. 11/827,524 dated Oct. 18, 2012, 24 pages.
Notice of Allowance for U.S. Appl. No. 11/827,524 dated Jun. 25, 2013, 11 pages.
Non-Final Office Action for U.S. Appl. No. 14/030,782 dated Oct. 6, 2014, 14 pages.
IBM User Guide, Version 2.1, AIX, Solaris and Windows NT, Third Edition (Mar. 1999), 102 pages.
White Paper, Foundry Networks, "Server Load Balancing in Today's Web-Enabled Enterprises," Apr. 2002, 10 pages.
International Search Report & Written Opinion for PCT Application PCT/US2015/012915 dated Apr. 10, 2015, 15 pages.
Gigamon: Vistapointe Technology Solution Brief; Visualize-Optimize-Monetize-3100-02; Feb. 2014; 2 pages.
Gigamon: Netflow Generation Feature Brief; 3099-04; Oct. 2014; 2 pages.
Gigamon: Unified Visibility Fabric Solution Brief; 3018-03; Jan. 2015; 4 pages.
Gigamon: Active Visibility for Multi-Tiered Security Solutions Overview; 3127-02; Oct. 2014; 5 pages.
Gigamon: Enabling Network Monitoring at 40Gbps and 100Gbps with Flow Mapping Technology White Paper; 2012; 4 pages.
Gigamon: Enterprise System Reference Architecture for the Visibility Fabric White Paper; 5005-03; Oct. 2014; 13 pages.
Gigamon: Gigamon Intelligent Flow Mapping White Paper; 3039-02; Aug. 2013; 7 pages.
Gigamon: Maintaining 3G and 4G LTE Quality of Service White Paper; 2012; 4 pages.
Gigamon: Monitoring, Managing, and Securing SDN Deployments White Paper; 3106-01; May 2014; 7 pages.
Gigamon: Service Provider System Reference Architecture for the Visibility Fabric White Paper; 5004-01; Mar. 2014; 11 pages.
Gigamon: Unified Visibility Fabric—A New Approach to Visibility White Paper; 3072-04; Jan. 2015; 6 pages.
Gigamon: The Visibility Fabric Architecture—A New Approach to Traffic Visibility White Paper; 2012-2013; 8 pages.
Ixia: Creating a Visibility Architecture - a New Perspective on Network Visibility White Paper; 915-6581-01 Rev. A, Feb. 2014; 14 pages.
Gigamon: Unified Visibility Fabric; https://www.gigamon.com/unified-visibility-fabric; Apr. 7, 2015; 5 pages.
Gigamon: Application Note Stateful GTP Correlation; 4025-02; Dec. 2013; 9 pages.
Brocade and IBM Real-Time Network Analysis Solution; 2011 Brocade Communications Systems, Inc.; 2 pages.
Ixia Anue GTP Session Controller; Solution Brief; 915-6606-01 Rev. A, Sep. 2013; 2 pages.
Netscout; Comprehensive Core-to-Access IP Session Analysis for GPRS and UMTS Networks; Technical Brief; Jul. 16, 2010; 6 pages.
Netscout: nGenius Subscriber Intelligence; Data Sheet; SPDS_001-12; 2012; 6 pages.
Gigamon: Visibility Fabric Architecture Solution Brief; 2012-2013; 2 pages.
Gigamon: Visibility Fabric; More than Tap and Aggregation.bmp; 2014; 1 page.
Ntop: Monitoring Mobile Networks (2G, 3G and LTE) using nProbe; http://www.ntop.org/nprobe/monitoring-mobile-networks-2g-3g-and-lte-using-nprobe; Apr. 2, 2015; 4 pages.
Gigamon: GigaVUE-HB1 Data Sheet; 4011-07; Oct. 2014; 4 pages.
Brocade IP Network Leadership Technology; Enabling Non-Stop Networking for Stackable Switches with Hitless Failover; 2010; 3 pages.
U.S. Appl. No. 60/998,410, filed Oct. 9, 2007 by Wang et al.
Non-Final Office Action for U.S. Appl. No. 13/584,534 dated Oct. 24, 2014, 24 pages.
Restriction Requirement for U.S. Appl. No. 13/584,534 dated Jul. 21, 2014, 5 pages.
Non-Final Office Action for U.S. Appl. No. 11/937,285 dated Jul. 6, 2009, 28 pages.
Final Office Action for U.S. Appl. No. 11/937,285 dated Mar. 3, 2010, 28 pages.
U.S. Appl. No. 15/205,889, filed Jul. 8, 2016 by Hegde et al.
U.S. Appl. No. 15/206,008, filed Jul. 8, 2016 by Hegde et al.
U.S. Appl. No. 14/603,304, NonFinal Office Action dated Aug. 1, 2016, 86 pages.
U.S. Appl. No. 14/320,138, Notice of Allowance dated Sep. 23, 2016, 17 pages.
Non-Final Office Action for U.S. Appl. No. 11/937,285 dated Aug. 17, 2010, 28 pages.
Final Office Action for U.S. Appl. No. 11/937,285 dated Jan. 20, 2011, 41 pages.
Final Office Action for U.S. Appl. No. 11/937,285 dated May 20, 2011, 37 pages.
Non-Final Office Action for U.S. Appl. No. 11/937,285 dated Nov. 28, 2011, 40 pages.
Notice of Allowance for U.S. Appl. No. 11/937,285 dated Jun. 5, 2012, 10 pages.
Gigamon: Adaptive Packet Filtering; Feature Brief; 3098-03; Apr. 2015; 3 pages.
Final Office Action for U.S. Appl. No. 14/030,782 dated Jul. 29, 2015, 14 pages.
Final Office Action for U.S. Appl. No. 13/584,534 dated Jun. 25, 2015, 21 pages.
U.S. Appl. No. 14/603,304, filed Jan. 22, 2015, by Hsu et al.
U.S. Appl. No. 14/848,586, filed Sep. 8, 2015, by Chen et al.
U.S. Appl. No. 14/848,645, filed Sep. 8, 2015, by Chen et al.
Notice of Allowance for U.S. Appl. No. 13/584,534 dated Jan. 6, 2016, 4 pages.
U.S. Appl. No. 12/272,618, Final Office Action dated May 5, 2014, 13 pages.
U.S. Appl. No. 12/272,618, NonFinal Office Action dated Jul. 29, 2013, 13 pages.
U.S. Appl. No. 12/272,618, NonFinal Office Action dated Jan. 12, 2015, 5 pages.
U.S. Appl. No. 12/272,618, Notice of Allowance dated Aug. 26, 2015, 11 pages.
U.S. Appl. No. 12/272,618, Final Office Action dated Feb. 28, 2012, 12 pages.
U.S. Appl. No. 13/925,670, NonFinal Office Action dated Nov. 16, 2015, 48 pages.
U.S. Appl. No. 14/230,590, Notice of Allowance dated Sep. 23, 2015, 8 pages.
U.S. Appl. No. 15/043,421, Notice of Allowance dated Jun. 27, 2016, 21 pages.
International Search Report & Written Opinion for PCT Application PCT/US2017/025998 dated Jul. 20, 2017, 8 pages.
Ixia & Vectra, Complete Visibility for a Stronger Advanced Persistent Threat (APT) Defense, pp. 1-2, May 30, 2016.
Extended European Search Report & Opinion for EP Application 17000212.5 dated Aug. 1, 2017, 9 pages.
Extended European Search Report & Opinion for EP Application 17000213.3 dated Aug. 1, 2017, 7 pages.
U.S. Appl. No. 15/466,732, filed Mar. 22, 2017 by Hegde et al.
U.S. Appl. No. 15/467,766, filed Mar. 23, 2017 by Nagaraj et al.
U.S. Appl. No. 15/425,777, filed Feb. 6, 2017, by Chen et al.
Joshi et al.: A Review of Network Traffic Analysis and Prediction Techniques; arxiv.org; 2015; 22 pages.
Anjali et al.: MABE: A New Method for Available Bandwidth Estimation in an MPLS Network; submitted to World Scientific on Jun. 5, 2002; 12 pages.
Cisco Nexus Data Broker: Scalable and Cost-Effective Solution for Network Traffic Visibility; Cisco 2015; 10 pages.
VB220-240G Modular 10G/1G Network Packet Broker; VSS Monitoring; 2016, 3 pages.
Big Tap Monitoring Fabric 4.5; Big Switch Networks; Apr. 2015; 8 pages.
Gigamon Intelligent Flow Mapping—Whitepaper; 3039-04; Apr. 2015; 5 pages.
Ixia White Paper; The Real Secret to Securing Your Network; Oct. 2014; 16 pages.
Accedian—Solution Brief; FlowBROKER; Feb. 2016; 9 pages.
Network Time Machine for Service Providers; NETSCOUT; http://enterprise.netscout.com/telecom-tools/lte-solutions/network-time-machine-service-providers; Apr. 18, 2017; 8 pages.
Arista EOS Central—Introduction to TAP aggregation; https://eos.arista.com/introduction-to-tap-aggregation/; Apr. 18, 2017; 6 pages.
Brocade Session Director—Data Sheet; 2016; https://www.brocade.com/content/dam/common/documents/content-types/datasheet/brocade-session-director-ds.pdf; 5 pages.
Ixia—Evaluating Inline Security Fabric: Key Considerations; White Paper; https://www.ixiacom.com/sites/default/files/2016-08/915-8079-01-S-WP-Evaluating%20Inline%20Security%20Fabric—v5.pdf; 10 pages.
Next-Generation Monitoring Fabrics for Mobile Networks; Big Switch Networks—White Paper; 2014; 9 pages.
Gigamon Adaptive Packet Filtering; Jan. 25, 2017; 3 pages.
VB220 Modular 10G/1G Network Packet Broker Datasheet; VSS Monitoring; 2016; 8 pages.
FlexaWare; FlexaMiner Packet Filter FM800PF; Jan. 27, 2017; 5 pages.
GL Communications Inc.; PacketBroker—Passive Ethernet Tap; Jan. 27, 2017; 2 pages.
U.S. Appl. No. 15/336,333, filed Oct. 27, 2016 by Vedam et al.
U.S. Appl. No. 14/603,304, Notice of Allowance dated Jan. 11, 2017, 13 pages.
Krishnan et al.: “Mechanisms for Optimizing LAG/ECMP Component Link Utilization in Networks”, Oct. 7, 2014, 27 pages, https://tools.ietf.org/html/draft-ietf-opsawg-large-flow-load-balancing-15.
U.S. Appl. No. 14/927,484, NonFinal Office Action dated Aug. 9, 2017, 77 pages.
Related Publications (1)
Number Date Country
20160285763 A1 Sep 2016 US
Provisional Applications (1)
Number Date Country
62137106 Mar 2015 US