Logical network traffic analysis

Information

  • Patent Grant
  • Patent Number
    10,469,342
  • Date Filed
    Wednesday, December 31, 2014
  • Date Issued
    Tuesday, November 5, 2019
Abstract
Some embodiments of the invention provide a method for gathering data for logical network traffic analysis by sampling flows of packets forwarded through a logical network. Some embodiments are implemented by a set of network virtualization controllers that, on a shared physical infrastructure, can implement two or more sets of logical forwarding elements that define two or more logical networks. In some embodiments, the method (1) defines an identifier for a logical network probe, (2) associates this identifier with one or more logical observation points in the logical network, and (3) distributes logical probe configuration data, including sample-action flow entry data, to one or more managed forwarding elements that implement the logical processing pipeline at the logical observation points associated with the logical network probe identifier. In some embodiments, the sample-action flow entry data specify the packet flows that the forwarding elements should sample and the percentage of packets within these flows that the forwarding elements should sample.
Description
BACKGROUND

Network virtualization provides network abstractions (network equipment, network services, etc.) that provide the same services and have the same behavior as network hardware equipment, but are independent of the physical implementation of those network abstractions. For instance, logical networks may provide abstractions such as logical L2 switches, logical L3 routers, logical DHCP servers, etc. that provide the same services and have the same behavior as their physical counterparts from the viewpoint of clients connected to those abstractions.


Typical implementations of logical networks rely on network overlays, i.e. sets of tunnels that carry the packets forwarded through the logical network over a fabric of physical networking equipment. Using network overlays or other techniques, logical network abstractions are decoupled from the physical hardware, e.g. logical L2 switches are typically not tied to the physical L2 switches in the fabric, and logical L3 routers are typically not tied to physical L3 routers in the fabric.


The decoupling between logical and physical network equipment allows for more efficient and flexible management. Logical network abstractions can be managed by software without requiring management of the physical equipment comprising the fabric. One advantage of this decoupling for management is the potential to perform monitoring in a more flexible way than in physical networks. A logical network's whole topology is typically known and managed by a logical network management system from a centralized point, and the connections between abstractions are easily managed in software. This allows for both more fine-grained control over monitoring, e.g. at the scale of individual packet forwarding rules in logical abstractions, and large-scale monitoring, e.g. at the scale of a whole logical network.


SUMMARY

Some embodiments of the invention provide a method for performing logical network traffic analysis by sampling packets forwarded through a logical network. Some embodiments are implemented by a set of network virtualization controllers that, on a shared physical infrastructure, can implement two or more sets of logical forwarding elements that define two or more logical networks.


In some embodiments, the method defines an identifier for a logical network probe. The method then associates the logical network probe to one or more logical observation points in the logical network. In some embodiments, the logical observation points can be any ingress or egress port of a logical forwarding element (e.g., a logical switch or logical router), or can be at any decision making point in the logical processing pipeline (e.g., at a firewall rule resolution or network address translation point in the logical processing pipeline) for processing packets received from a data compute node (e.g., a virtual machine, computer, etc.).


The method then generates data for a sample-action flow entry in the logical processing pipeline associated with each logical observation point. The method distributes the sample-action flow entry data to a set of managed forwarding elements (e.g., hardware forwarding elements or software forwarding elements that execute on host computing devices with one or more virtual machines) that process data packets associated with the set of logical observation points. The set of managed forwarding elements is part of a group of managed forwarding elements that implement the logical network. Each managed forwarding element in the set uses a sample-action flow entry to identify the logical-network packets that it needs to sample. In some embodiments, the distributed sample-action flow entry data includes a set of sample-action flow entries. In other embodiments, the distributed sample-action flow entry data is data that allows the set of managed forwarding elements to produce sample-action flow entries. In some embodiments, each sample-action flow entry has the logical network probe identifier and a set of matching criteria, and the distributed sample-action flow entry data includes this information.


In some embodiments, a logical observation point can correspond to one or more physical observation points in one or more managed forwarding elements. The method distributes the sample-action flow entry data to each managed forwarding element that is supposed to process the flow entry for packets at the physical observation point. The sample-action flow entry causes the forwarding element to sample packets at the observation points based on a set of matching criteria. The matching criteria set of each sample-action flow entry defines the types of packet flows that are candidates for sampling for the logical network probing. However, not all the packets that satisfy the matching criteria set will have to be sampled in some embodiments. This is because in some embodiments, the matching criteria include a user-definable sampling-percentage criterion, which causes the forwarding element to only sample a certain percentage of the packets that satisfy the other matching criteria.


Once a managed forwarding element determines that a packet matches a sample-action flow entry and should be sampled, the managed forwarding element samples the packet and sends the sampled data to a location. In some embodiments, the location is a daemon (e.g., a control plane daemon or data plane daemon) of the managed forwarding element (MFE). This daemon forwards the sampled data to a set of data collectors (e.g., one or more servers) that analyze the sampled packet data for the logical network. In some embodiments, this daemon gathers sample data for one or more logical observation points on the forwarding element, and forwards the collected data periodically to the data collector set, while in other embodiments the daemon forwards the received data in real time to the data collector set. In some of the embodiments in which the daemon gathers the sample data, the daemon produces flow analysis data for each flow that is sampled, and sends this flow analysis data to the data collector set. For instance, in some embodiments, the daemon periodically sends the data collector set the following data tuple for each sampled flow: flow's identifier data (e.g., flow's five tuple), packet count, byte count, start time stamp, and end time stamp.


In other embodiments, the MFE directly sends the sample data to the data collector set after sampling this data. In some embodiments, some sample-action flow entries send the sample-packet data to the MFE daemon, while other sample-action flow entries send the sample-packet data directly to the data collector set. In some embodiments, the location that is to receive the sampled data is specified in the sample-action flow entry, while in other embodiments, this location is part of the forwarding element's configuration for the logical network probe.


When the forwarding element forwards the sample-packet data to the location specified in the sample-action flow entry, the forwarding element also forwards the logical network probe identifier that is specified in the sample-action flow entry. The forwarding-element daemons and the data collectors use the logical network probe identifiers to analyze the sample-packet data that they receive and to generate analysis data that summarizes it. For instance, the logical network probe identifier can be used by collectors to differentiate the monitoring statistics received from a single forwarding element for multiple probes defined on the forwarding element, and to aggregate the statistics received from multiple forwarding elements for a single probe instantiated on multiple forwarding elements.


The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description, and Drawings is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description, and Drawings, but rather are to be defined by the appended claims, because the claimed subject matters can be embodied in other specific forms without departing from the spirit of the subject matters.





BRIEF DESCRIPTION OF DRAWINGS

The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.



FIG. 1 illustrates the network virtualization system of some embodiments.



FIG. 2 illustrates an example of a logical network that has its logical router and two logical switches distributed to two different host computing devices that execute the VMs of the logical network.



FIG. 3 illustrates an example of a set of logical observation points that are associated with a logical network probe.



FIG. 4 illustrates a process that a controller performs to configure a logical network probe.



FIG. 5 illustrates an example of how the sample-action flow entries that are distributed get implemented in some embodiments.



FIG. 6 illustrates a process that some embodiments perform when a logical network probe is destroyed.



FIG. 7 illustrates a process that some embodiments perform when a logical network probe is reconfigured.



FIG. 8 presents a process that conceptually illustrates the operation that a managed forwarding element performs to process a sample-action flow entry.



FIG. 9 conceptually illustrates a software-switching element of some embodiments that is implemented in a host computing device.



FIG. 10 conceptually illustrates an electronic system with which some embodiments of the invention are implemented.





DETAILED DESCRIPTION

In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.


Some embodiments of the invention provide a method for gathering data for logical network traffic analysis (e.g., for conformance testing, accounting management, security management, etc.) by sampling flows of packets forwarded through a logical network. Some embodiments are implemented by a set of network virtualization controllers that, on a shared physical infrastructure, can implement two or more sets of logical forwarding elements that define two or more logical networks. The physical infrastructure includes several physical forwarding elements (e.g., hardware or software switches and routers, etc.) that are managed by the network virtualization controllers to implement the logical forwarding elements. These forwarding elements are referred to as managed forwarding elements (MFEs).


In some embodiments, the method (1) defines an identifier for a logical network probe, (2) associates this identifier with one or more logical observation points in the logical network, and (3) distributes logical probe configuration data, including sample-action flow entry data, to one or more managed forwarding elements that implement the logical processing pipeline at the logical observation points associated with the logical network probe identifier. In some embodiments, the sample-action flow entry data specify the packet flows that the forwarding elements should sample and the percentage of packets within these flows that the forwarding elements should sample.


Also, in some embodiments, the sample-action flow entry data and/or logical probe configuration data includes the locations to which the forwarding elements should forward the sampled data and the type of data to include in the sampled data. The returned sample data includes a portion of the sampled packet or the entire packet, along with the logical network probe identifier, in order to allow the supplied data to be aggregated and analyzed. In this document, the term “packet” is used to refer to a collection of bits in a particular format sent across a network. One of ordinary skill in the art will recognize that the term packet may be used herein to refer to various formatted collections of bits that may be sent across a network, such as Ethernet frames, TCP segments, UDP datagrams, IP packets, etc.


Before describing the collection of logical network packet sampling through the sample-action flow entries of some embodiments, the network virtualization system of some embodiments will be first described by reference to FIG. 1. This figure illustrates a system 100 in which some embodiments of the invention are implemented. This system includes a set of network virtualization controllers 115 (also called controller nodes below) that flexibly implement the logical network probing method of some embodiments through the logical network management software that they execute, without requiring the configuration of the individual network equipment or using dedicated network equipment. The controller nodes 115 provide services such as coordination, configuration storage, programming interface, etc. In some of these embodiments, the controllers 115 implement two or more logical networks over a shared physical infrastructure. In some embodiments, the controllers 115 decouple the logical network abstractions from the physical network equipment, by implementing the logical networks through network tunnel overlays (e.g., through GRE tunnels, MPLS tunnels, etc.).


As shown in FIG. 1, the system 100 also includes one or more data collectors 175 that collect logical network sample data. As further shown, the controller nodes and the data collectors connect to several transport nodes 110 through an IP network 105 that includes the shared physical infrastructure. The transport nodes 110 provide logical network ports (e.g. Ethernet (L2) ports) that can be either hardware ports 120 connected to physical equipment (e.g., computers or physical forwarding elements), or software ports 125 (sometimes called virtual ports) connected to virtual machines 130 executing on the transport nodes. In other words, transport nodes in some embodiments include (1) host computing devices 110a on which software forwarding elements 150 execute, and (2) standalone hardware forwarding elements 110b. The software and hardware forwarding elements are managed by the controller nodes 115 to implement two or more logical networks. Hence, these forwarding elements are referred to below as managed forwarding elements (MFEs).


In some embodiments, the transport and control nodes can communicate through the IP protocol over the physical interconnection network 105, which is used to transport both application data packets between transport nodes 110, and control service data from the controller nodes 115. In some embodiments, the application data packets are transmitted between transport nodes over a tunneling protocol (e.g., GRE or MPLS), with those tunnels forming an overlay network that interconnects the transport nodes over the IP network.


Physical and logical equipment connected to the transport nodes' logical ports are given the illusion that they are interacting with physical network equipment, such as L2 switches and L3 routers, as specified in the logical network configuration. The logical network configuration includes the logical network equipment (e.g. logical L2 switches and logical L3 routers), their configuration, and their logical interconnection. In some embodiments, the logical network is configured by users via an API provided by the control nodes 115.


The logical network configuration specifies the optional mapping of each logical port to a physical port (e.g., hardware or software port) on a transport node, or the logical connection to another logical port with a logical link. Such mappings are arbitrary, i.e. the logical network topology, the physical locations of concrete ports, and the physical network topology are completely independent.


In addition, the instantiation of a logical network entity can be distributed by the logical networking system to multiple transport nodes, i.e., a logical network entity may not be located on a single node. For instance, the instantiation of a logical switch can be distributed to all the transport nodes where its logical ports are mapped to virtual machine interfaces or physical interfaces. The instantiation of a logical router can be distributed to all the transport nodes where all the logical switches it is logically connected to are instantiated. Likewise, the instantiation of a logical router port can be distributed to all the transport nodes where the logical switch it is connected to is instantiated. FIG. 2 illustrates an example of a logical network 200 that has its logical router 205 and logical switches 210 and 215 distributed to two different host computing devices 220 that execute the VMs of the logical network.


In some embodiments, logical forwarding elements (also called logical network entities) are implemented on a transport node by programming one or more managed forwarding elements (e.g., a programmable switch) on the transport node. The MFE on a transport node can be implemented in hardware, if the transport node is physical network equipment, or as a software forwarding element (e.g., an OVS (Open vSwitch) switch) that executes on a host computing device with one or more virtual machines.


In some embodiments, the managed forwarding element is programmed using an MFE programming interface, which can be either an API or a protocol such as OpenFlow. In some embodiments, the MFE interface allows the logical network system's control plane to program flow tables that specify, for each flow: (1) a set of matching criteria, which specifies the pattern of packets matching that flow in terms of elements of packet headers at Layers 2 to 7, and (2) actions to be performed on packets in the flow. Possible actions include outputting the packets to a specified port, and modifying packet header elements.
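
To make the shape of such a flow entry concrete, here is a minimal Python sketch of a flow table entry as a set of matching criteria plus an ordered action list; the class name, field names, and the match/action strings are illustrative assumptions rather than the syntax of OpenFlow or any particular MFE interface.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class FlowEntry:
    """A simplified model of a flow table entry: match criteria plus actions."""
    match: Dict[str, str]                              # e.g. header fields at Layers 2-7
    actions: List[str] = field(default_factory=list)   # e.g. output to a port, set a field
    priority: int = 0

# A flow that forwards IPv4 packets arriving on VM1's port out of logical port 2.
example_entry = FlowEntry(
    match={"in_port": "vm1", "eth_type": "0x0800"},
    actions=["output:2"],
    priority=100,
)
```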


In some embodiments, the logical network configuration is translated into flow tables in all transport node MFEs to perform packet processing equivalent to that of the physical network equipment that is simulated by the logical network's topology. The MFE interface also supports configuring the MFE to export monitoring information. In some embodiments, this is implemented by using the OVSDB protocol.


On one hand, the distributed nature of network virtualization (i.e., the instantiation of the logical network entities on many transport nodes) makes monitoring more challenging, as many transport nodes may have to be monitored to monitor a single logical network entity. On the other hand, the centralized control of the whole logical network, and the precise centralized control of each managed forwarding element (through the controller nodes 115) on each transport node, provides more control and visibility into the network than physical networks, which are decentralized by their very nature.


As mentioned above, some embodiments allow a user to define one or more logical network probes to gather logical network traffic for monitoring. The logical network configuration in some embodiments includes logical network probes. In some embodiments, the logical network probes are translated into sample-action flow entries for the flow tables of the managed forwarding elements that are associated with the logical observation points that are connected to the logical network probe. As mentioned above, and further described below, sample-action flow entries cause the managed forwarding elements to sample logical-network packets, so that the sampled logical-network data can be relayed to one or more data collectors 175.


In some embodiments, logical network probes are configured by users via the control nodes' API to observe and monitor packets at any set of logical observation points in the logical network. Any number of logical network probes can be created, reconfigured, and destroyed by a user at any time. In some embodiments, each logical network probe has a unique identifier in the whole logical networking system. A single logical network entity can be probed by one or more logical network probes, each with a different configuration. Also, in some embodiments, different logical network entities can be configured and probed at different observation points. Examples of such logical network entities, and their different observation points, in some embodiments include:

    • an ingress logical port;
    • an egress logical port;
    • all ingress ports of a logical switch;
    • all egress ports of a logical switch;
    • all ingress ports of a logical router;
    • all egress ports of a logical router;
    • all ingress ports of all logical switches;
    • all egress ports of all logical switches;
    • all ingress ports of all logical routers;
    • all egress ports of all logical routers;
    • all ingress ports of all logical equipment;
    • all egress ports of all logical equipment;
    • a NAT rule at ingress;
    • a NAT rule at egress;
    • a firewall rule at ingress;
    • traffic accepted by a logical firewall rule;
    • traffic dropped by a logical firewall rule;
    • traffic rejected by a logical firewall rule;
    • a logical firewall at ingress;
    • a logical firewall at egress;
    • traffic accepted by a logical firewall;
    • traffic dropped by a logical firewall;
    • traffic rejected by a logical firewall;



FIG. 3 illustrates an example of a set of logical observation points that are associated with a logical network probe 305. In this example, the observation points are the ports of logical switch 210 that connect to VM1, VM2 and VM3, and the port of the logical router 205 that connects to the logical switch 215. As shown, sample-action flow entry data is defined and pushed to the computing devices 220 for these ports. In some embodiments, the distributed sample-action flow entry data includes a set of sample-action flow entries. In other embodiments, the distributed sample-action flow entry data is data that allows the set of managed forwarding elements to produce sample-action flow entries. In some embodiments, each sample-action flow entry has the logical network probe identifier and a set of matching criteria, and the distributed sample-action flow entry data includes this information.


In the example illustrated in FIG. 3, the sample flow entries cause the software forwarding elements that execute on the computing devices 220 to sample the packets that match the flow matching criteria and pass through these ports (i.e., the ports of the logical switch 210, and the logical router port that connects to logical switch 215) at the desired sampling percentage. In some embodiments, this sample data is forwarded directly to the set of data collectors 175 with the logical network probe's identifier. In other embodiments, this sample data is first aggregated and analyzed by a process executing on the forwarding element or the host, before this process supplies analysis data regarding this sample data to the data collector set 175. Once gathered at the data collector set, the data can be aggregated and analyzed in order to produce data, graphs, reports, and alerts regarding the specified logical network probe.


In some embodiments, one managed forwarding element that executes on a host computing device can implement multiple different logical forwarding elements of the same or different types. For instance, for the example illustrated in FIG. 3, one software switch in some embodiments executes on each host, and this software switch can be configured to implement the logical switches and the logical routers of one or more logical networks. In other embodiments, one managed forwarding element that executes on a host computing device can implement multiple different logical forwarding elements, but all the logical forwarding elements implemented by a given MFE have to be of the same type. For instance, for the example illustrated in FIG. 3, two software forwarding elements execute on each host in these embodiments: one software forwarding element can be configured to implement the logical switches of one or more logical networks, while the other software forwarding element can be configured to implement the logical routers of one or more logical networks.


In some embodiments, the managed forwarding elements at logical observation points can be configured to export monitoring statistics about observed flows to flow statistics collectors connected to the physical network or the logical network, using protocols such as sFlow, IPFIX, and NetFlow. Also, in some embodiments, the managed forwarding elements at logical observation points can be configured to export sampled packets to collectors connected to the physical network or the logical network, using protocols such as sFlow and PSAMP/IPFIX. In some embodiments, a logical network probe performs monitoring through statistical sampling of packets at a logical observation point.


In some embodiments, the configuration of each logical network probe (via the control nodes' API) includes (1) the set of logical network entities to probe, (2) the protocol to use to interact with collectors, (3) the collectors' network addresses and transport ports, (4) the packet sampling probability, and (5) protocol-specific details (e.g. for IPFIX, active timeout, idle timeout, flow cache size, etc.).
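
As a rough illustration of what such a probe configuration might contain, the following Python structure mirrors the five items above; the key names, identifier format, and collector address are hypothetical and are not the actual schema of the control nodes' API.

```python
probe_config = {
    "probe_id": "lnp-42",                     # unique identifier allocated by the controller
    "entities": [                             # (1) logical network entities to probe
        "logical-switch-210:ingress-ports",
        "logical-router-205:port-to-switch-215",
    ],
    "protocol": "ipfix",                      # (2) protocol used to talk to the collectors
    "collectors": [("192.0.2.10", 4739)],     # (3) collector network address / transport port
    "sampling_probability": 0.10,             # (4) sample 10% of matching packets
    "protocol_options": {                     # (5) protocol-specific details (IPFIX example)
        "active_timeout_s": 60,
        "idle_timeout_s": 10,
        "flow_cache_size": 8192,
    },
}
```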


To support configuring the flow monitoring statistics export for each logical network probe instantiated on a transport node, the transport nodes' MFE configuration interface is extended in some embodiments to include (1) a protocol to use to interact with collectors, (2) collectors' network addresses and transport ports, and (3) protocol-specific details (e.g. for IPFIX protocol, active timeout, idle timeout, flow cache size, etc.). As mentioned above, each logical network probe configuration on the switch is identified with the probe's unique identifier.


As mentioned above, a new type of action, called sample action, is added to the interface offered by the transport nodes' MFE, in order to sample logical network flow data at logical observation points in the network that are associated with the user-defined logical network probes. In some embodiments, each sample action flow entry contains three sets of parameters, which are (1) one or more matching criteria that specify the flows that match the sample action, (2) a packet sampling probability that specifies a percentage of packets that should be sampled in the matching flow, and (3) a logical network probe identifier. In some embodiments, the sample action flow entry also includes (1) the type of data to sample, and (2) the location to send the sampled data. In other embodiments, the type of data to sample and the location for sending the sample data are defined by the logical network configuration or some other configuration of the managed forwarding element.
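
The sketch below shows one plausible way to represent a sample action carrying these parameter sets; the dictionary layout is an assumption for illustration and does not reproduce the actual encoding used by the transport nodes' MFE interface.

```python
def make_sample_action(probe_id: str, probability: float, collector=None) -> dict:
    """Build a sample action: the logical network probe identifier, the packet
    sampling probability, and optionally the location to send sampled data
    (omitted when that location comes from the probe or MFE configuration)."""
    action = {"type": "sample", "probe_id": probe_id, "probability": probability}
    if collector is not None:
        action["collector"] = collector       # e.g. ("192.0.2.10", 4739)
    return action

# A sample action for probe "lnp-42" that samples 10% of the matching flow's packets.
sample = make_sample_action("lnp-42", 0.10)
```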


A logical network probe can sample an arbitrary number of logical entities. In some embodiments, a logical network probe is instantiated by probe configurations and sample actions inserted into flows on many transport nodes. Each of the transport nodes instantiating a logical network probe sends monitoring traffic directly to the configured collectors. A logical network probe's unique identifier is sent in the monitoring protocol messages to collectors, in a way specific to each protocol. This identifier can be used by collectors to differentiate the monitoring statistics received from a single transport node for multiple probes instantiated on the node, and to aggregate the statistics received from multiple transport nodes for a single probe instantiated on multiple nodes. In an embodiment, the IPFIX protocol is used to send to collectors flow records containing the probe's unique identifier in the standard “observation point ID” element.
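
To make the aggregation role of the probe identifier concrete, here is a rough sketch of a collector that keys incoming monitoring records by that identifier (carried, in the IPFIX case, in the observation point ID element); the record field names are assumptions for illustration.

```python
from collections import defaultdict

# Statistics aggregated per logical network probe, across all transport nodes.
stats_by_probe = defaultdict(lambda: {"packets": 0, "bytes": 0, "exporters": set()})

def ingest_record(record: dict) -> None:
    """Fold one monitoring record into the per-probe aggregate.

    `record` is assumed to carry the probe's unique identifier, the exporting
    transport node, and packet/byte counts for the reported flow."""
    probe = stats_by_probe[record["probe_id"]]
    probe["packets"] += record["packet_count"]
    probe["bytes"] += record["byte_count"]
    probe["exporters"].add(record["exporter"])   # which transport nodes reported this probe
```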



FIG. 4 illustrates a process 400 that a controller 115 performs to configure a logical network probe. This process is performed when a logical network probe is created by a user via the control nodes' API. As shown, the process initially (at 405) allocates a new unique identifier for the network probe that is defined by the user. Next, at 410, the process adds a new logical network probe configuration to the control plane of each transport node MFE that implements the logical network entities that have the logical observation points that are associated with the defined logical network probe. In some embodiments, the process adds the logical network probe configuration through each affected transport node's MFE configuration interface.


To add (at 410) the logical network probe configuration to each affected MFE (i.e., each transport node MFE that implements the logical network entities that have the logical observation points that are associated with the defined logical network probe), the process has to identify the transport node MFEs that include the logical observation points that are associated with the defined logical network probe. In some embodiments, the logical observation points can be any ingress or egress port of a logical forwarding element (e.g., a logical switch or logical router), or can be at any decision making point in the logical processing pipeline (e.g., at a firewall rule resolution or network address translation point in the logical processing pipeline) for processing packets received from a data compute node (e.g., a virtual machine, computer, etc.). Also, one logical observation point might exist as multiple points in one or more transport nodes. For instance, the logical port between the logical router 205 and the logical switch 215 in FIG. 3 appears as two points on the two host computing devices 220.


At 415, the process 400 modifies the flow entries that implement each probed logical network entity to add sample actions, which contain the probe's sampling probability and unique identifier. The process sends (at 420) modified flow entry data to the managed forwarding elements of each transport node that executes or includes the logical network entities that have the logical observation points that are associated with the defined logical network probe. The specific flows that are modified to include sample actions are specific to each logical network entity type and to the specific implementation of the logical networking system. The flows implementing a logical network entity can contain multiple sample actions, possibly with different logical network probe identifiers, if the entity is probed by multiple probes.


In some embodiments, the distributed sample-action flow entry data includes a set of sample-action flow entries. In other embodiments, the distributed sample-action flow entry data includes data that allows the set of managed forwarding elements to produce sample-action flow entries. In some embodiments, each sample-action flow entry has the logical network probe identifier and a set of matching criteria, and the distributed sample-action flow entry data includes this information. In some embodiments, the network controllers distribute the sample-action flow entries through control channel communication with the MFEs that implement the logical network entities that are associated with the logical observation points of a logical network probe. One or more control plane modules of the MFEs then use the control channel data that the MFEs receive to produce data plane representation of the flow entries, which these modules then push into the data plane of the MFEs, as further described below by reference to FIG. 9. After 420, the process 400 ends.
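
A compact sketch of process 400 from the controller's point of view is given below; the helper callables push_config and push_sample_flows stand in for the control channel communication with each MFE, and all names are hypothetical.

```python
import uuid

def configure_probe(entity_to_mfes, probability, collectors, push_config, push_sample_flows):
    """Sketch of process 400: allocate a probe identifier, push the probe
    configuration to every affected MFE, and add sample actions to the flow
    entries that implement each probed logical network entity.

    `entity_to_mfes` maps each probed logical network entity to the MFEs that
    implement it; the push_* callables stand in for control plane messages."""
    probe_id = str(uuid.uuid4())                                    # 405: unique identifier
    affected_mfes = {mfe for mfes in entity_to_mfes.values() for mfe in mfes}
    for mfe in affected_mfes:                                       # 410: probe configuration
        push_config(mfe, {"probe_id": probe_id, "collectors": collectors})
    for entity, mfes in entity_to_mfes.items():                     # 415/420: sample actions
        for mfe in mfes:
            push_sample_flows(mfe, entity,
                              {"type": "sample",
                               "probe_id": probe_id,
                               "probability": probability})
    return probe_id
```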



FIG. 5 illustrates an example of how the sample-action flow entries that are distributed get implemented in some embodiments. Specifically, for the logical probe of FIG. 3, FIG. 5 illustrates the locations where the sample-action flow entries are inserted in the flow tables of the software forwarding elements that execute on the hosts 220a and 220b. As shown in FIG. 5, each logical switch includes an L2 ingress ACL table, an L2 logical forwarding table, and an L2 egress ACL table, while the logical router includes an L3 ingress ACL table, an L3 logical forwarding table, and an L3 egress ACL table. Each of these sets of tables is processed by one managed forwarding element on each host 220 in some embodiments, while each of these sets of tables is processed by two managed forwarding elements on each host 220 (one for processing the logical switching operations, and another for processing the logical routing operations) in other embodiments.


As further shown in FIG. 5, the logical probe 305 in some embodiments is implemented by inserting sample-action flow entries in the L2 logical forwarding table of the logical switch 210 on both devices 220a and 220b. However, on the device 220a, the sample-action flow entries are specified in terms of the forwarding element ports that connect to VM1 and VM2, while on the device 220b, the sample-action flow entry is specified in terms of the forwarding element port that connects to VM3. On both devices, the L3 logical forwarding table of the logical router 205 has a sample-action flow entry for the logical probe 305 and this sample-action flow entry is specified in terms of the logical router's logical port that connects to the logical switch 215.
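
The table layout of FIG. 5 might be sketched schematically as follows; the table names follow the figure, while the port names and action strings are purely illustrative.

```python
# Illustrative flow table pipeline on host 220a for the logical probe of FIGS. 3 and 5.
pipeline_host_220a = [
    ("L2 ingress ACL", []),
    ("L2 logical forwarding", [            # sample actions for the ports of logical switch 210
        {"match": {"in_port": "vm1"}, "actions": ["sample(probe-305)", "l2-forward"]},
        {"match": {"in_port": "vm2"}, "actions": ["sample(probe-305)", "l2-forward"]},
    ]),
    ("L2 egress ACL", []),
    ("L3 ingress ACL", []),
    ("L3 logical forwarding", [            # sample action for the router port to logical switch 215
        {"match": {"out_port": "lr205-to-ls215"}, "actions": ["sample(probe-305)", "l3-route"]},
    ]),
    ("L3 egress ACL", []),
]
```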


In some embodiments, a flow entry (including a sample action flow entry) is specified in terms of a set of matching criteria and an action. To process a flow entry for a packet, the forwarding element compares the packet's attribute set (e.g., header values) against the flow entry's matching criteria set, and performs the flow entry's action when the packet's attribute set matches the flow entry's matching criteria set. When the logical processing pipeline has multiple flow entry tables (such as ingress/egress ACL tables, and L2/L3 tables), and each of these tables has multiple flow entries, the forwarding element may examine multiple flow entries on multiple table to process a packet in some embodiments.



FIG. 6 illustrates a process 600 that some embodiments perform when a logical network probe is destroyed (e.g., via user input that is received through the control nodes' API). As shown, the process initially removes (at 605) the logical network probe's configuration from the control plane of all transport nodes that instantiate the logical network entities probed by this probe. In some embodiments, the configuration is removed via each affected transport node's MFE configuration interface. Next, the process modifies (at 610) the flow entries that implement each probed logical network entity to remove any sample action containing the probe's unique identifier. To do this, the process in some embodiments directs each affected MFE, through the MFE's control plane interface, to remove the sample-action flow entries that contain the probe's identifier. After 610, the process ends.
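
The teardown in process 600 can be summarized in a few lines; remove_probe_config and strip_sample_actions are hypothetical stand-ins for the control plane calls made to each transport node's MFE.

```python
def destroy_probe(probe_id, affected_mfes, remove_probe_config, strip_sample_actions):
    """Sketch of process 600: remove a logical network probe from every MFE
    that instantiates a probed logical network entity."""
    for mfe in affected_mfes:
        remove_probe_config(mfe, probe_id)      # 605: drop the probe configuration
        strip_sample_actions(mfe, probe_id)     # 610: remove sample actions with this id
```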



FIG. 7 illustrates a process 700 that some embodiments perform when a logical network probe is reconfigured (e.g., via user input that is received through the control nodes' API). As shown, the process initially determines (at 705) whether the logical network probe reconfiguration adds any new logical network entity to the set of entities probed by the logical network probe. If not, the process transitions to 715, which will be described below. Otherwise, the process (at 710) sends the logical network probe configuration to any MFE that implements the newly added logical network entity, if the MFE did not already have the logical network probe configuration by virtue of implementing another logical network entity associated with the logical network probe. As mentioned above, the probe's configuration is added to an MFE through control plane communication in some embodiments. At 710, the process also directs the MFEs that implement any newly added logical network entity with the logical observation points of the logical network probe, to modify their flow entries to include sample action flow entries. After 710, the process transitions to 715.


At 715, the process determines whether the logical network probe reconfiguration removes any logical network entity from the set of probed entities. If not, the process transitions to 725, which will be described below. Otherwise, the process transitions to 720, where it removes the probe's configuration from all MFEs on transport nodes where no entities are probed anymore. The probe's configuration is removed from an MFE through control plane communication in some embodiments. The probe's configuration remains on MFEs on transport nodes where probed entities are still instantiated. After 720, the process transitions to 725.


At 725, the process determines whether the logical network probe reconfiguration modifies any of the probe's configuration excluding the probe's sampling probability. If not, the process transitions to 735, which will be described below. Otherwise, the process updates (at 730) the probe's configuration on all MFEs on all transport nodes where probed entities are instantiated. The probe's configuration is updated on an MFE through control plane communication in some embodiments. After 730, the process transitions to 735.


At 735, the process determines whether the logical network probe reconfiguration modifies the probe's sampling probability. If not, the process ends. Otherwise, the process directs (at 740) the MFEs implementing each probed logical network entity to modify all sample actions containing the probe's identifier to contain the probe's new sampling probability. The sampling probability is adjusted in some embodiments through control plane communication with the MFEs that implement the logical network probe. In some embodiments, the control channel communication provides new sample-action flow entries with new sampling probabilities, while in other embodiments, the control channel communication provides instructions to the MFE to modify the probabilities in the sample-action flow entries that the MFE already stores. After 740, the process ends.


As mentioned above, the network controller set distributes sample-action flow entry data for a logical probe to each forwarding element that is supposed to process the sample-action flow entry for one of the logical probe's logical observation points. The sample-action flow entry causes the forwarding element to sample packets at the logical observation points based on a set of matching criteria and based on a sampling probability. To illustrate this, FIG. 8 presents a process 800 that conceptually illustrates the operation that a managed forwarding element performs to process a sample-action flow entry.


As shown, this process initially determines (at 805) whether a packet matches a sample-action flow entry, by determining whether the packet's header attributes (e.g., its L2 or L3 packet header attributes) match the flow entry's set of matching criteria. The matching criteria set of each sample-action flow entry defines the types of packet flows that are candidates for sampling for the logical network probing. If the packet's header attributes do not match the flow entry's matching criteria set, the process ends.


Otherwise, the process determines (at 810) whether it should sample the packet because, in some embodiments, not all the packets that satisfy the matching criteria set have to be sampled. As mentioned above, the matching criteria set includes a user-definable sampling-percentage criterion, which causes the forwarding element to only sample a certain percentage of the packets that satisfy the other matching criteria. To resolve the decision at 810, the process randomly selects a number between 0 and 100 (inclusive of 0 and 100), and determines whether this number is equal to or less than the sampling percentage. If so, the process determines that it should sample the packet. Otherwise, the process determines that it should not sample the packet.
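
The decision at 810 translates almost directly into code; the sketch below assumes the sampling percentage is expressed on a 0 to 100 scale, as described above.

```python
import random

def should_sample(sampling_percentage: float) -> bool:
    """Decide whether to sample a packet that already matched the other criteria:
    draw a random number in [0, 100] and sample if it is at or below the percentage."""
    return random.uniform(0, 100) <= sampling_percentage
```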


When the process determines (at 810) that it should not sample the packet, it ends. Otherwise, the process samples (at 815) the packet, sends (at 820) the sampled data to a specified location, and then ends. In some embodiments, the sampled data includes the entire packet, while in other embodiments, the sample data includes a portion (e.g., payload) of the packet and/or metadata regarding the packet. In some embodiments, the sample-action flow entry specifies a location for the forwarding element to send data about the sample packet (e.g., the sample packet itself and metadata associated with the sample packet), while in other embodiments, this location is specified by the logical network probe configuration data that the MFE receives or by other configuration data of the MFE.


In some embodiments, the location that the sample-action flow entry sends its sample data is a daemon of the forwarding element that (1) gathers sample action data for one or more logical observation points on the forwarding element, and (2) forwards the collected data periodically or in real time to one or more data collectors in a set of data collectors (e.g., one or more servers) that collect and analyze logical network probe data from various forwarding elements. One example of such a daemon is a daemon of an OVS software switching element of some embodiments of the invention. This daemon will be further described below by reference to FIG. 9.


In some embodiments, an exporter process in the daemon maintains a flow cache containing data aggregated from the data received from the sample-action flow entry. The flow cache data is aggregated by flow, i.e. all data received for packets in a given flow is aggregated into the same flow cache entry. Flow cache entries may optionally include packet header elements identifying the flow (e.g. the IP source and destination address, protocol number, and source and destination transport ports), as well as aggregated statistics about the flow (e.g. the timestamps of the first and last sampled packets in the flow, the number of sampled packets, and the number of sampled bytes). Data from the flow cache entries is extracted from the cache and sent to collectors by the exporter process, depending on its caching policy.


In some embodiments, the logical network probe configuration data sent to the MFE contains parameters to configure the exporter process's caching policy. Those parameters may include the addresses of the collectors to which aggregated flow data is sent, and parameters controlling the maximum period to cache each flow cache data tuple before sending it to collectors. Flow caching parameters may include an active timeout, i.e. the maximum time period a flow cache entry can remain in the flow cache since its first packet was sampled, an idle timeout, i.e. the maximum time period a flow cache entry can remain in the flow cache after its last packet was sampled, and a maximum number of flow entries in the cache.
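
A rough sketch of such an exporter flow cache is shown below: entries are keyed by the flow's five-tuple, aggregate packet and byte counts along with first/last timestamps, and are flushed according to the active and idle timeouts. The class and parameter names are illustrative, not the OVS exporter's actual implementation.

```python
import time

class FlowCache:
    """Aggregates sampled packets per flow and flushes entries to collectors."""

    def __init__(self, send_to_collectors, active_timeout=60, idle_timeout=10, max_entries=8192):
        self.entries = {}                      # five-tuple -> aggregated statistics
        self.send = send_to_collectors         # stand-in for the export path (e.g. IPFIX)
        self.active_timeout = active_timeout   # max age since the first sampled packet
        self.idle_timeout = idle_timeout       # max age since the last sampled packet
        self.max_entries = max_entries

    def add_sample(self, five_tuple, byte_count):
        now = time.time()
        if len(self.entries) >= self.max_entries and five_tuple not in self.entries:
            self.expire()                      # make room by flushing expired entries
        entry = self.entries.setdefault(
            five_tuple, {"packets": 0, "bytes": 0, "first": now, "last": now})
        entry["packets"] += 1
        entry["bytes"] += byte_count
        entry["last"] = now

    def expire(self):
        """Flush entries that hit the active or idle timeout."""
        now = time.time()
        for key, entry in list(self.entries.items()):
            if (now - entry["first"] >= self.active_timeout or
                    now - entry["last"] >= self.idle_timeout):
                self.send(key, entry)
                del self.entries[key]
```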


In some embodiments, the location that the sample-action flow entry sends its sample data is a data collector. In other words, the sample-action flow entry in some embodiments directs the forwarding element to send the sample packet data directly to the set of data collectors. In some embodiments, some sample-action flow entries send the sample-packet data to the MFE daemon, while other sample-action flow entries send the sample-packet data directly to the data collector set.


When the forwarding element forwards the sample-packet data to the location specified in the sample-action flow entry, the forwarding element also forwards the logical network probe identifier that is specified in the sample-action flow entry. The MFE daemons and the data collectors use the logical network probe identifiers to aggregate and/or analyze the data that they receive for the logical network probes. For instance, the logical network probe identifier can be used by collectors to differentiate the monitoring statistics received from a single forwarding element for multiple probes defined on the forwarding element, and to aggregate the statistics received from multiple forwarding elements for a single probe instantiated on multiple forwarding elements.



FIG. 9 conceptually illustrates a software-switching element 905 of some embodiments that is implemented in a host computing device. In this example, the software-switching element 905 includes (1) an Open vSwitch (OVS) kernel module 920 with a bridge 908 in the hypervisor kernel space, and (2) an OVS daemon 940 and an OVS database server 945 in the hypervisor user space 950. The hypervisor is a software abstraction layer that runs either on top of the host's operating system or on bare metal (just on top of the host's hardware). In this example, two VMs 902 and 904 execute on top of the hypervisor and connect to the kernel module 920, as further described below. In some embodiments, the user space 950 and the kernel space are the user space and kernel space of a dom 0 (domain 0) VM that executes on the hypervisor.


As shown, the user space 950 includes the OVS daemon 940 and the OVS database server 945. The OVS daemon 940 is an application that runs in the background of the user space. This daemon includes a flow generator 910, user space flow entries 915, and an exporter daemon 965. In some embodiments, the OVS daemon 940 communicates with the network controller using the OpenFlow protocol. The OVS daemon 940 of some embodiments receives switch configuration from the network controller 115 (in a network controller cluster) and management information from the OVS database server 945. The management information includes bridge information, and the switch configuration includes various flows. These flows are stored in a flow table 915 of the OVS daemon 940. Accordingly, the software-switching element 905 may be referred to as a managed forwarding element.


In some embodiments, the exporter daemon 965 receives sample data from the bridge 908, aggregates this data, analyzes this data, and forwards the analyzed data periodically to one or more data collectors. For instance, for each sampled flow, the exporter daemon analyzes multiple sample data sets to produce the following data tuple for reporting to the data collector set 175: the flow's five tuple, packet count, byte count, start timestamp, and end timestamp. In other embodiments, the exporter daemon 965 relays the sample data in real time to one or more data collectors.


In some embodiments, the OVS database server 945 communicates with the network controller 115 and the OVS daemon 940 through a database communication protocol (e.g., OVSDB (OVS database) protocol). The database protocol of some embodiments is a JavaScript Object Notation (JSON) remote procedure call (RPC) based protocol. The OVS database server 945 is also an application that runs in the background of the user space. The OVS database server 945 of some embodiments communicates with the network controller 115 in order to configure the OVS switching element (e.g., the OVS daemon 940 and/or the OVS kernel module 920). For instance, the OVS database server 945 receives management information from the network controller 115 for configuring the bridge 908, ingress ports, egress ports, QoS configurations for ports, etc., and stores the information in a set of databases.


As illustrated in FIG. 9, the OVS kernel module 920 processes and routes network data (e.g., packets) between VMs running on the host and network hosts external to the host (i.e., network data received through the host's NICs). For example, the OVS kernel module 920 of some embodiments routes packets between VMs running on the host and network hosts external to the host that are coupled to the OVS kernel module 920 through the bridge 908.


In some embodiments, the bridge 908 manages a set of rules (e.g., flow entries) that specify operations for processing and forwarding packets. The bridge 908 communicates with the OVS daemon 940 in order to process and forward packets that the bridge 908 receives. For instance, the bridge 908 receives commands, from the network controller 115 via the OVS daemon 940, related to processing and forwarding of packets.


In the example of FIG. 9, the bridge 908 includes a packet processor 930, a classifier 960, and an action processor 935. The packet processor 930 receives a packet and parses the packet to strip header values. The packet processor 930 can perform a number of different operations. For instance, in some embodiments, the packet processor 930 is a network stack that is associated with various network layers to differently process different types of data that it receives. Irrespective of all the different operations that it can perform, the packet processor 930 passes the header values to the classifier 960.


The classifier 960 accesses the datapath cache 962 to find matching flows for different packets. The datapath cache 962 contains any recently used flows. The flows may be fully specified, or may contain one or more match fields that are wildcarded. When the classifier 960 receives the header values, it tries to find a flow or rule installed in the datapath cache 962. If it does not find one, then the control is shifted to the OVS daemon 940, so that it can search for a matching flow entry in the user space flow entry table 915.


If the classifier 960 finds a matching flow, the action processor 935 receives the packet and performs the set of actions that is associated with the matching flow. The action processor 935 of some embodiments also receives, from the OVS daemon 940, a packet and a set of instructions to perform on the packet. For instance, when there is no matching flow in the datapath cache 962, the packet is sent to the OVS daemon 940. In some embodiments, the OVS daemon 940 can generate a flow and install that flow in the datapath cache 962. The OVS daemon 940 of some embodiments can also send the packet to the action processor 935 with the set of actions to perform on that packet.
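
The fast-path/slow-path split described above can be sketched as follows; datapath_cache, userspace_lookup, and apply_actions are hypothetical stand-ins for the classifier's cache, the OVS daemon's flow lookup, and the action processor.

```python
def process_packet(headers, datapath_cache, userspace_lookup, apply_actions):
    """Fast path first: look up the packet's headers in the datapath cache; on a
    miss, ask the user space daemon for a flow, install it in the cache, and then
    apply the flow's actions (which may include a sample action) to the packet."""
    key = frozenset(headers.items())
    flow = datapath_cache.get(key)
    if flow is None:
        flow = userspace_lookup(headers)       # may resubmit across several flow tables
        datapath_cache[key] = flow             # install the consolidated flow in the cache
    apply_actions(headers, flow["actions"])
```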


In some embodiments, the action processor 935 performs the sampling associated with a sample-action flow entry that is stored in the user space entries 915 or the datapath cache 962. After performing this sampling, the action processor 935 sends the sampled data to a location that is identified (1) in the sample-action flow entry in some embodiments, (2) in the logical network probe configuration stored by the switch 905 in other embodiments, or (3) in some other configuration storage of the switch 905 in still other embodiments. In some embodiments, this location is the exporter daemon 965 of the forwarding element for some or all of the sample-action flow entries. As mentioned, the exporter daemon relays the sample data it receives in real time to the data collector set in some embodiments, while in other embodiments this daemon analyzes this data together with other similar sample data to produce analysis data that it provides periodically to the data collector set. In some embodiments, the action processor 935 sends the sampled data directly to the data collector set for some or all of the sample-action flow entries.


The OVS daemon 940 of some embodiments includes a datapath flow generator 910. The datapath flow generator 910 is a component of the software switching element 905 that makes switching decisions. Each time there is a miss in the datapath cache 962, the datapath flow generator 910 generates a new flow to install in the cache. In some embodiments, the datapath flow generator works in conjunction with its own separate classifier (not shown) to find one or more matching flows from a set of one or more flow tables 915. However, different from the classifier 960, the OVS daemon's classifier can perform one or more resubmits. That is, a packet can go through the daemon's classifier multiple times to find several matching flows from one or more flow tables 915. When multiple matching flows are found, the datapath flow generator 910 of some embodiments generates one consolidated flow entry to store in the datapath cache 962.



FIG. 10 conceptually illustrates an electronic system 1000 with which some embodiments of the invention are implemented. The electronic system 1000 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, a blade computer etc.), or any other sort of electronic device. This electronic system can be the network controller or a host computing device that executes some embodiments of the invention. As shown, the electronic system includes various types of computer readable media and interfaces for various other types of computer readable media. Specifically, the electronic system 1000 includes a bus 1005, processing unit(s) 1010, a system memory 1025, a read-only memory 1030, a permanent storage device 1035, input devices 1040, and output devices 1045.


The bus 1005 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 1000. For instance, the bus 1005 communicatively connects the processing unit(s) 1010 with the read-only memory 1030, the system memory 1025, and the permanent storage device 1035. From these various memory units, the processing unit(s) 1010 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments.


The read-only-memory (ROM) 1030 stores static data and instructions that are needed by the processing unit(s) 1010 and other modules of the electronic system. The permanent storage device 1035, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 1000 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1035.


Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 1035, the system memory 1025 is a read-and-write memory device. However, unlike storage device 1035, the system memory is a volatile read-and-write memory, such as random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 1025, the permanent storage device 1035, and/or the read-only memory 1030. From these various memory units, the processing unit(s) 1010 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.


The bus 1005 also connects to the input and output devices 1040 and 1045. The input devices enable the user to communicate information and select commands to the electronic system. The input devices 1040 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 1045 display images generated by the electronic system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.


Finally, as shown in FIG. 10, bus 1005 also couples electronic system 1000 to a network 1065 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks, such as the Internet. Any or all components of electronic system 1000 may be used in conjunction with the invention.


Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.


While the above discussion primarily refers to microprocessors or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.


As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of this specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer or electronic device. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.


While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. For instance, this specification refers throughout to computational and network environments that include virtual machines (VMs). However, virtual machines are merely one example of data compute nodes (DCNs) or data compute end nodes, also referred to as addressable nodes. DCNs may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules.


VMs, in some embodiments, operate with their own guest operating systems on a host using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. In some embodiments, the host operating system uses namespaces to isolate the containers from each other and therefore provides operating-system-level segregation of the different groups of applications that operate within different containers. This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers. Such containers are more lightweight than VMs.
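
As a concrete illustration of this operating-system-level segregation, the following minimal sketch assumes a Linux host with the standard ip-netns tool and root privileges; the namespace names "container-a" and "container-b" are hypothetical, and this is not the virtualization software's API. It creates two network namespaces, each of which sees only its own interfaces, routes, and addresses.

```python
# Minimal sketch of namespace-based isolation on a Linux host (requires root).
# Namespace names are illustrative.
import subprocess

def run(cmd):
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# One network namespace per "container"; each gets its own isolated network stack.
for ns in ("container-a", "container-b"):
    run(["ip", "netns", "add", ns])
    run(["ip", "netns", "exec", ns, "ip", "link", "set", "lo", "up"])

# A process launched inside a namespace sees only that namespace's devices,
# so the two groups of applications are segregated at the OS level.
run(["ip", "netns", "exec", "container-a", "ip", "addr"])
```

Container runtimes typically combine this network isolation with additional namespaces (e.g., PID and mount namespaces) to achieve the segregation described above without the overhead of a full guest operating system.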


A hypervisor kernel network interface module, in some embodiments, is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/transmit threads. One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc. One of ordinary skill in the art will recognize that while the specification refers to VMs, the examples given could be any type of DCNs, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules. In fact, the example networks could include combinations of different types of DCNs in some embodiments.


Also, a number of the figures (e.g., FIGS. 4, 6, 7, and 8) conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described, may not be performed in one continuous series of operations, and may differ in different embodiments. Furthermore, a process could be implemented using several sub-processes, or as part of a larger macro process. Therefore, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.

Claims
  • 1. A method of gathering data to perform traffic analysis between data compute nodes (DCNs) executing on host computers in a datacenter and associated with a logical network connecting the DCNs, the method comprising: defining a unique identifier for a logical network probe; associating the logical network probe with at least one logical observation point in the logical network, the logical network implemented over a physical network of the datacenter comprising a plurality of managed forwarding elements, the logical observation point corresponding to a plurality of different physical observation points on at least two host computers associated with a set of at least two managed forwarding elements executing on the at least two host computers along with at least two DCNs; generating data for a sample-action flow entry in a logical processing pipeline associated with the logical observation point; and distributing the sample-action flow entry data to the set of at least two managed forwarding elements in the physical network that are for processing data packets associated with the logical observation point, each managed forwarding element in the set for using a sample-action flow entry to identify packets for sampling.
  • 2. The method of claim 1, wherein once a managed forwarding element determines that a packet matches a sample-action flow entry and should be sampled, the managed forwarding element samples the packet and sends the sampled data to a location for forwarding data regarding the sampled packets to a set of data collectors for analyzing the forwarded data.
  • 3. The method of claim 2, wherein the location is a daemon of the managed forwarding element.
  • 4. The method of claim 3, wherein the daemon aggregates the sample data from a plurality of matched packets and analyzes the aggregated data before forwarding analysis data to the set of data collectors.
  • 5. The method of claim 2, wherein the sampled data includes the unique logical probe identifier so that the data collector set can analyze the forwarded data for the logical probe.
  • 6. The method of claim 1, wherein each sample-action flow entry includes the unique logical network probe identifier and a set of matching criteria that identifies the packets that match the flow entry.
  • 7. The method of claim 6, wherein each sample-action flow entry further includes a sampling percentage that directs the managed forwarding element that processes the flow entry to only sample the percentage of the packets that satisfy the matching criteria.
  • 8. The method of claim 1, wherein once a managed forwarding element determines that a packet matches a sample-action flow entry and should be sampled, the managed forwarding element samples the packet and sends the sampled data to a set of data collectors for analyzing the sampled packet data for the logical network.
  • 9. The method of claim 8, wherein the sampled data includes the unique logical probe identifier so that the data collector set can analyze the sample data as being part of the logical probe.
  • 10. The method of claim 1, wherein the sample-action flow entry data includes a set of sample-action flow entries.
  • 11. The method of claim 1, wherein the sample-action flow entry data includes data for use by the set of at least two managed forwarding elements to produce sample-action flow entries.
  • 12. The method of claim 1, wherein the logical observation point comprises a packet modification point in the logical processing pipeline for modifying the packets processed.
  • 13. A non-transitory machine readable medium storing a program for configuring managed forwarding elements to gather data regarding packets sent between data compute nodes (DCNs) executing on host computers in a datacenter and processed through a logical network connecting the DCNs, the program comprising sets of instructions for: defining a logical network probe, with a unique identifier, and associating the logical network probe with at least one logical observation point in the logical network, the logical network implemented over a physical network of the datacenter comprising a plurality of managed forwarding elements, the logical observation point corresponding to a plurality of different physical observation points on at least two host computers associated with a set of at least two managed forwarding elements executing on the at least two host computers along with at least two DCNs; generating data for programming a set of managed forwarding elements that implement a set of logical network entities that are associated with the logical observation point; and distributing the programming data to the set of at least two managed forwarding elements in the physical network, each managed forwarding element in the set for using the programming data to sample packets associated with the logical observation point.
  • 14. The machine readable medium of claim 13, wherein from the programming data, each managed forwarding element stores at least one sample-action flow entry for identifying packets for sampling.
  • 15. The machine readable medium of claim 14, wherein each sample-action flow entry includes the unique logical network probe identifier and a set of matching criteria that identifies the packets that match the flow entry.
  • 16. The machine readable medium of claim 15, wherein each sample-action flow entry further includes a sampling percentage that directs the managed forwarding element that processes the flow entry to only sample the percentage of the packets that satisfy the matching criteria.
  • 17. The machine readable medium of claim 14, wherein once a managed forwarding element determines that a packet matches a sample-action flow entry and should be sampled, the managed forwarding element samples the packet and sends the sampled data to a location for aggregating and analyzing the sampled data and producing analysis data for a set of data collectors for aggregating and analyzing the analysis data, and the sampled data includes the unique logical probe identifier so that the data collector set can receive the unique logical probe identifier with the analysis data and thereby analyze the analysis data as being part of the logical probe.
  • 18. The machine readable medium of claim 17, wherein the location is a daemon of the managed forwarding element, wherein the daemon gathers and analyzes the sample data from a plurality of matched packets before forwarding analysis data to the set of data collectors.
  • 19. The machine readable medium of claim 14, wherein once a managed forwarding element determines that a packet matches a sample-action flow entry and should be sampled, the managed forwarding element samples the packet and sends the sampled data to a set of data collectors for analyzing the sampled packet data for the logical network, the sampled data includes the unique logical probe identifier so that the data collector set can analyze the sample data as being part of the logical probe.
  • 20. The machine readable medium of claim 13, wherein the logical observation point comprises a packet modification point for modifying the packets processed.
  • 21. The machine readable medium of claim 13, wherein the logical observation point comprises an address translation point for changing addresses that are contained in the packets.
  • 22. The machine readable medium of claim 13, wherein the logical observation point comprises a firewall point for examining packets to determine whether the packets should be filtered.
Related Publications (1)
  US 20160105333 A1, Apr. 2016, US
Provisional Applications (1)
  US 62/062,768, Oct. 2014, US