Today, in the software-defined network (SDN) world, logical elements are configured on top of physical constructs. From a troubleshooting perspective, network administrators would ideally like to correlate data plane constructs with management plane constructs. However, this information is not readily available in a visual format, and customers often spend considerable time manually associating this information. Additionally, as the network configuration evolves, customers must continue manually updating the correlations between these constructs. The correlation information is also critical from a configuration perspective: if a network administrator cannot identify or locate current correlation information, particularly in a visual manner, configuration changes to the network become difficult. The correlation information can also be helpful for network administrators when troubleshooting network issues (e.g., data message drops).
Some embodiments of the invention provide a visualization of the topology of a logical network that is implemented within a physical network. A method of some embodiments identifies (i) a set of logical elements of the logical network and (ii) for each logical element in the set of logical elements, a set of one or more physical elements of the physical network that implements the logical element. Through a user interface (UI), the method displays a visualization that includes (i) nodes representing the set of logical elements, (ii) connections between the logical elements, (iii) nodes representing the sets of physical elements that implement each logical element, and (iv) correlations between the node representing each logical element and the node(s) representing each set of physical elements that implements that logical element.
In some embodiments, the set of logical elements is organized hierarchically by type of logical element in the visualization, with logical elements that provide a connection to networks external to the logical network displayed at the top of the hierarchy, logical elements that are logical network endpoints displayed at the bottom of the hierarchy, and additional logical elements displayed between the top and the bottom of the hierarchy. Some such embodiments display the logical elements in a pyramid, with the sets of physical elements displayed alongside the set of logical elements on the left and right sides of the pyramid. The correlations, in some such embodiments, are displayed as dashed lines between each node representing a logical element and one or more nodes representing the set of physical elements implementing the logical element.
In some embodiments, at least one set of physical elements that implements a particular logical element is represented as a group node indicating a type of physical element and the number of physical elements of that type that implement the logical element. The group node, in some embodiments, is used when the number of physical elements implementing a particular logical element exceeds a specified threshold value (e.g., five physical elements). For instance, a logical switch might be implemented by a large number (e.g., hundreds or thousands) of software forwarding elements executing on host computers; rather than displaying such a large number of nodes representing the different host computers in the visualization, a single node is displayed that indicates the number of host computers. These group nodes are selectable in some embodiments to cause the visualization to display individual nodes representing the individual members of the group, in order for a user to determine additional information about the individual physical elements. In some embodiments, when the number of physical elements implementing the particular logical element does not exceed the specified threshold value, each physical element is represented in the visualization by an individual node with a dashed line to the particular logical element indicating a correlation between the physical element and the particular logical element.
Similarly, groups of logical elements are represented by a group node in the visualization, according to some embodiments. For example, in some embodiments, when the number of Tier-1 gateways attached to the same Tier-0 gateway exceeds a specified threshold, the Tier-1 gateways are represented in the visualization using a group node. In some embodiments, data compute nodes (e.g., virtual machines (VMs), containers, and physical servers) attached to a logical switch are always displayed as a group node. As described for the group node representing physical elements, selecting the group node representing logical elements can cause the visualization to display nodes representing the individual logical elements (e.g., VMs) that are represented by the group node.
In some embodiments, the visualization displays a first set of nodes with a first appearance (e.g., a first color) and a second set of nodes with a second appearance (e.g., a second color), and the second set of nodes can be selected in a particular manner (e.g., by hovering a cursor over a node in the second set of nodes) to cause the visualization to display a pop-up window that includes information regarding the hovered-over node (e.g., the name of the element represented by the node, the type of logical or physical element represented by the node, etc.).
Some embodiments also include additional information that is specific to the type of element represented by the node. For example, in some embodiments, the information displayed for a Tier-0 gateway specifies whether the gateway is configured in active-active or active-standby mode, while the information for a Tier-1 gateway specifies whether the failover mode for the gateway is preemptive or non-preemptive (i.e., whether a preferred gateway is always active when it is available). For L2 segments, the information in some embodiments specifies whether the segment is a logical switch (i.e., an overlay segment within the logical network) or a VLAN segment (e.g., for connecting uplinks to external networks), as well as whether the segment is connected to more than one gateway and the number of gateways to which it is connected. In some such embodiments, either type of node can also be selected in a different manner to cause the visualization to display additional information about the logical or physical element represented by the selected node.
Examples of the logical elements include different types of gateway logical routers, logical switches, and VMs, while the physical elements, in some embodiments, include host computers on which the VMs or other data compute nodes (i.e., logical network endpoints) execute and which implement logical switches and/or distributed logical routers, as well as physical machines such as edge devices that implement gateway logical routers (specifically, the centralized routing components of logical routers in some embodiments).
Each host computer for hosting the data compute nodes, in some embodiments, executes a managed forwarding element (operating, e.g., within virtualization software of the host machine) that implements the logical networks for the data compute nodes that reside on the host computer. Thus, for example, the managed forwarding element will implement the logical switches to which its data compute nodes attach, as well as distributed routing components of the logical routers to which these logical switches attach, other logical switches attached to those distributed routing components, etc. Logical routers may include centralized routing components (e.g., for providing stateful services and/or connecting to external networks), which are implemented on a separate physical edge device (e.g., as a VM or within a forwarding element datapath of the edge device). The forwarding elements of these hosts may also implement the various logical switches and distributed routing components as needed.
When the same edge device implements multiple gateway logical routers, some embodiments represent the edge device with a single node in the visualization, with dashed lines from this node to each gateway logical router implemented by the edge device. Similarly, when a particular gateway logical router is implemented by multiple edge devices (but fewer than the threshold value for grouping nodes), some embodiments display dashed lines from each edge device to the particular gateway logical router. It should also be noted that, in many cases, the host computers implementing a particular logical switch will also implement a distributed logical router associated with the gateway logical router to which that switch connects and, conversely, the edge device(s) implementing a gateway logical router also implement the logical switch(es) connected to that gateway logical router.
In addition to providing a visualization of the overall network topology, some embodiments also provide an option for users to perform flow tracing for data message flows between logical network endpoints. When a user initiates (i.e., through the UI) flow tracing for a particular data message flow (e.g., between two VMs), some embodiments perform the flow tracing operation and display a visualization of the path traversed by the data message flow through the logical network. In some embodiments, the path is represented by a hierarchically organized pyramid with a first node representing the source VM shown at the bottom left and a second node representing the destination VM shown at the bottom right. Any logical elements (generally at least one logical switch, and possibly one or more logical routers) that the data message flow logically traverses are displayed in a hierarchical manner.
Additionally, nodes representing physical elements that implement the logical elements in the pyramid are shown in the visualization on the left and right sides of the pyramid, with dashed lines between nodes representing each physical element and nodes representing the logical elements implemented by the physical element. In some embodiments, the visualization also includes representations of tunnels, with tunnels that have not experienced issues appearing in a first color (e.g., green) and tunnels that have experienced issues appearing in a second color (e.g., red). The visualization also depicts both north-south traffic (e.g., traffic between a VM and an edge of the network that connects to external networks) and east-west traffic, according to some embodiments.
The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, the Detailed Description, the Drawings, and the Claims is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, the Detailed Description, and the Drawings.
The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.
In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
Some embodiments of the invention provide a visualization of the topology of a logical network that is implemented within a physical network. A method of some embodiments identifies (i) a set of logical elements of the logical network and (ii) for each logical element in the set of logical elements, a set of one or more physical elements of the physical network that implements the logical element. Through a user interface (UI), the method displays a visualization that includes (i) nodes representing the set of logical elements, (ii) connections between the logical elements, (iii) nodes representing the sets of physical elements that implement each logical element, and (iv) correlations between the node representing each logical element and the node(s) representing each set of physical elements that implements that logical element.
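To make this concrete, the following sketch (hypothetical names and structures; this document prescribes no particular API) shows one way the identified logical elements, physical elements, and their correlations could be represented:

```python
# A minimal sketch of the data the visualization gathers; all names are
# illustrative assumptions, not an API defined by this document.
from dataclasses import dataclass, field
from typing import Iterator, List, Tuple

@dataclass
class LogicalElement:
    name: str
    kind: str  # e.g., "tier0-gateway", "tier1-gateway", "logical-switch", "vm"
    connections: List[str] = field(default_factory=list)  # connected logical elements

@dataclass
class PhysicalElement:
    name: str
    kind: str  # e.g., "edge-node", "host"
    implements: List[str] = field(default_factory=list)  # logical elements implemented

def correlations(physical: List[PhysicalElement]) -> Iterator[Tuple[str, str]]:
    """Yield (physical, logical) name pairs to render as dashed correlation lines."""
    for p in physical:
        for logical_name in p.implements:
            yield p.name, logical_name
```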
In some embodiments, the set of logical elements is organized hierarchically by type of logical element in the visualization, with logical elements that provide a connection to networks external to the logical network displayed at the top of the hierarchy, logical elements that are logical network endpoints displayed at the bottom of the hierarchy, and additional logical elements displayed between the top and the bottom of the hierarchy. Some such embodiments display the logical elements in a pyramid, with the sets of physical elements displayed alongside the set of logical elements on the left and right sides of the pyramid. The correlations, in some such embodiments, are displayed as dashed lines between each node representing a logical element and one or more nodes representing the set of physical elements implementing the logical element.
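One way to compute such a hierarchy, assuming the element kinds from the sketch above, is to assign each kind a tier and group elements into top-to-bottom rows (a simplified layout sketch, not a prescribed algorithm):

```python
# Pyramid tiers: external-facing gateways at the top, endpoints at the bottom.
TIER_BY_KIND = {"tier0-gateway": 0, "tier1-gateway": 1, "logical-switch": 2, "vm": 3}

def layout_rows(logical_elements):
    """Group logical elements into top-to-bottom pyramid rows by tier."""
    rows = {}
    for element in logical_elements:
        rows.setdefault(TIER_BY_KIND.get(element.kind, 2), []).append(element)
    return [rows[tier] for tier in sorted(rows)]
```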
In some embodiments, at least one set of physical elements that implements a particular logical element is represented as a group node indicating a type of physical element and the number of physical elements of that type that implement the logical element. The group node, in some embodiments, is used when the number of physical elements implementing a particular logical element exceeds a specified threshold value (e.g., five physical elements). For instance, a logical switch might be implemented by a large number (e.g., hundreds or thousands) of software forwarding elements executing on host computers; rather than displaying such a large number of nodes representing the different host computers in the visualization, a single node is displayed that indicates the number of host computers. These group nodes are selectable in some embodiments to cause the visualization to display individual nodes representing the individual members of the group, in order for a user to determine additional information about the individual physical elements. In some embodiments, when the number of physical elements implementing the particular logical element does not exceed the specified threshold value, each physical element is represented in the visualization by an individual node with a dashed line to the particular logical element indicating a correlation between the physical element and the particular logical element.
Similarly, groups of logical elements are represented by a group node in the visualization, according to some embodiments. For example, in some embodiments, when the number of Tier-1 gateways attached to the same Tier-0 gateway exceeds a specified threshold, the Tier-1 gateways are represented in the visualization using a group node. In some embodiments, data compute nodes (e.g., virtual machines (VMs), containers, and physical servers) attached to a logical switch are always displayed as a group node. As described for the group node representing physical elements, selecting the group node representing logical elements can cause the visualization to display nodes representing the individual logical elements (e.g., VMs) that are represented by the group node.
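A sketch of this grouping rule, assuming the five-element example threshold given above; the `always_group` flag models the always-grouped endpoint case:

```python
GROUP_THRESHOLD = 5  # example threshold value from the text

def to_display_nodes(elements, type_label, always_group=False):
    """Collapse elements into a single group node when the set is large enough."""
    if always_group or len(elements) > GROUP_THRESHOLD:
        return [{"group": True,
                 "label": f"{len(elements)} {type_label}",
                 "members": [e.name for e in elements]}]
    return [{"group": False, "label": e.name} for e in elements]
```

Under this sketch, VMs attached to a logical switch would be passed `always_group=True`, and selecting the resulting group node would re-render the view using the `members` list.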
In some embodiments, the visualization displays a first set of nodes with a first appearance (e.g., a first color) and a second set of nodes with a second appearance (e.g., a second color), and the second set of nodes can be selected in a particular manner (e.g., by hovering a cursor over a node in the second set of nodes) to cause the visualization to display a pop-up window that includes information regarding the hovered-over node (e.g., the name of the element represented by the node, the type of logical or physical element represented by the node, etc.).
Some embodiments also include additional information that is specific to the type of element represented by the node. For example, in some embodiments, the information displayed for a Tier-0 gateway specifies whether the gateway is configured in active-active or active-standby mode, while the information for a Tier-1 gateway specifies whether the failover mode for the gateway is preemptive or non-preemptive (i.e., whether a preferred gateway is always active when it is available). For L2 segments, the information in some embodiments specifies whether the segment is a logical switch (i.e., an overlay segment within the logical network) or a VLAN segment (e.g., for connecting uplinks to external networks), as well as whether the segment is connected to more than one gateway and the number of gateways to which it is connected. In some such embodiments, either type of node can also be selected in a different manner to cause the visualization to display additional information about the logical or physical element represented by the selected node.
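The following sketch illustrates how such type-specific pop-up content might be assembled; the field names (`mode`, `failover`, `segment_type`, `gateway_count`) are assumptions for illustration only:

```python
def popup_info(node: dict) -> dict:
    """Assemble hover pop-up contents for a node; field names are illustrative."""
    info = {"name": node["name"], "type": node["kind"]}
    if node["kind"] == "tier0-gateway":
        info["mode"] = node.get("mode")          # "active-active" or "active-standby"
    elif node["kind"] == "tier1-gateway":
        info["failover"] = node.get("failover")  # "preemptive" or "non-preemptive"
    elif node["kind"] == "logical-switch":
        info["segment_type"] = node.get("segment_type")  # "overlay" or "vlan"
        info["gateway_count"] = node.get("gateway_count", 0)
    return info
```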
Examples of the logical elements include different types of gateway logical routers, logical switches, and VMs, while the physical elements, in some embodiments, include host computers on which the VMs or other data compute nodes (i.e., logical network endpoints) execute and which implement logical switches and/or distributed logical routers, as well as physical machines such as edge devices that implement gateway logical routers (specifically, the centralized routing components of logical routers in some embodiments).
Each host computer for hosting the data compute nodes, in some embodiments, executes a managed forwarding element (operating, e.g., within virtualization software of the host machine) that implements the logical networks for the data compute nodes that reside on the host computer. Thus, for example, the managed forwarding element will implement the logical switches to which its data compute nodes attach, as well as distributed routing components of the logical routers to which these logical switches attach, other logical switches attached to those distributed routing components, etc. Logical routers may include centralized routing components (e.g., for providing stateful services and/or connecting to external networks), which are implemented on a separate physical edge device (e.g., as a VM or within a forwarding element datapath of the edge device). The forwarding elements of these hosts may also implement the various logical switches and distributed routing components as needed.
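The span of logical elements that a host's managed forwarding element implements can be sketched as a closure over the attachment graph (a simplification that ignores the distributed/centralized split described above; all names are assumptions):

```python
def mfe_span(resident_endpoints, attached_to):
    """Logical elements a host's MFE implements: the switches its resident
    endpoints attach to, the distributed routers those attach to, and so on.
    `attached_to` maps an element name to the names of elements it attaches to."""
    span, frontier = set(), list(resident_endpoints)
    while frontier:
        for neighbor in attached_to.get(frontier.pop(), ()):
            if neighbor not in span:
                span.add(neighbor)
                frontier.append(neighbor)
    return span
```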
When the same edge device implements multiple gateway logical routers, some embodiments represent the edge device with a single node in the visualization, with dashed lines from this node to each gateway logical router implemented by the edge device. Similarly, when a particular gateway logical router is implemented by multiple edge devices (but fewer than the threshold value for grouping nodes), some embodiments display dashed lines from each edge device to the particular gateway logical router. It should also be noted that, in many cases, the host computers implementing a particular logical switch will also implement a distributed logical router associated with the gateway logical router to which that switch connects and, conversely, the edge device(s) implementing a gateway logical router also implement the logical switch(es) connected to that gateway logical router.
In addition to providing a visualization of the overall network topology, some embodiments also provide an option for users to perform flow tracing for data message flows between logical network endpoints. When a user initiates (i.e., through the UI) flow tracing for a particular data message flow (e.g., between two VMs), some embodiments perform the flow tracing operation and display a visualization of the path traversed by the data message flow through the logical network. In some embodiments, the path is represented by a hierarchically organized pyramid with a first node representing the source VM shown at the bottom left and a second node representing the destination VM shown at the bottom right. Any logical elements (generally at least one logical switch, and possibly one or more logical routers) that the data message flow logically traverses are displayed in a hierarchical manner.
Additionally, nodes representing physical elements that implement the logical elements in the pyramid are shown in the visualization on the left and right sides of the pyramid, with dashed lines between nodes representing each physical element and nodes representing the logical elements implemented by the physical element. In some embodiments, the visualization also includes representations of tunnels, with tunnels that have not experienced issues appearing in a first color (e.g., green) and tunnels that have experienced issues appearing in a second color (e.g., red). The visualization also depicts both north-south traffic (e.g., traffic between a VM and an edge of the network that connects to external networks) and east-west traffic, according to some embodiments.
As shown, the logical network topology includes a Tier-0 gateway logical router 110, a Tier-1 gateway logical router 120, logical switches 130 and 132, and sets of VMs 140 and 142. Tier-0 gateway logical routers, in some embodiments, provide connections to external networks (e.g., public networks such as the Internet, other logical networks, etc.) for the underlying logical network. Tier-1 gateway logical routers, in some embodiments, segregate different sets of logical switches from each other and, in some cases, provide services for data traffic to and from the logical network endpoints (e.g., VMs) that attach to those logical switches.
The physical network elements implementing the logical network include a set of 6 edge nodes 150 implementing the Tier-0 gateway logical router 110 in active-active mode (i.e., a mode in which the logical gateway is active at all 6 edge nodes), a pair of edge nodes 152a-152b implementing the Tier-1 gateway logical router 120 in active-standby mode (i.e., a mode in which the Tier-1 logical gateway at edge node 152a is active and the Tier-1 logical gateway at edge node 152b is a standby gateway in case of failover), a set of 5 hosts 154 implementing the logical switch 130 and VMs 140, and a set of 10 hosts 156 implementing the logical switch 132 and VMs 142.
The logical network elements are organized hierarchically in a pyramid, as shown, with the network endpoints (in this case, the VMs 140 and 142) displayed at the bottom corners of the pyramid, and the common logical element through which these different segments of the network communicate (i.e., the Tier-0 gateway logical router 110) at the top center of the pyramid. The physical elements implementing the logical elements are displayed on the left and right sides of the pyramid, with dashed lines between each physical element and the logical element(s) that it implements, as shown. For example, the dashed line 160 represents the correlation between the 6 edge nodes 150 and the Tier-0 gateway logical router 110.
When multiple physical elements represented by multiple nodes implement a single logical element, the correlation is illustrated by dashed lines from each of the physical elements to the single logical element. For example, the dashed lines 162a and 162b represent the correlation between the edge nodes 152a and 152b and the Tier-1 gateway logical router 120.
Similarly, when one physical element (or set of physical elements represented by a group node) implements more than one logical element, the correlations are illustrated by dashed lines from the physical element to each logical element it implements. For instance, the group node representing 5 hosts 154 is shown as implementing the logical switch 130 as well as the VMs 140, and the group node representing 10 hosts 156 is shown as implementing the logical switch 132 as well as the VMs 142. These correlations are represented by the dashed lines 164a and 164b, and 166a and 166b, respectively.
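Using the data-model sketch above, the example topology of the UI 100 could be instantiated as follows (reference numerals from the text reused as illustrative names):

```python
logical = [
    LogicalElement("tier0-110", "tier0-gateway", connections=["tier1-120", "switch-132"]),
    LogicalElement("tier1-120", "tier1-gateway", connections=["switch-130"]),
    LogicalElement("switch-130", "logical-switch", connections=["vms-140"]),
    LogicalElement("switch-132", "logical-switch", connections=["vms-142"]),
    LogicalElement("vms-140", "vm"),
    LogicalElement("vms-142", "vm"),
]
physical = [
    PhysicalElement("edges-150", "edge-node", implements=["tier0-110"]),         # group of 6
    PhysicalElement("edge-152a", "edge-node", implements=["tier1-120"]),
    PhysicalElement("edge-152b", "edge-node", implements=["tier1-120"]),
    PhysicalElement("hosts-154", "host", implements=["switch-130", "vms-140"]),  # group of 5
    PhysicalElement("hosts-156", "host", implements=["switch-132", "vms-142"]),  # group of 10
]
# correlations(physical) then yields the dashed lines 160, 162a-162b,
# 164a-164b, and 166a-166b described above.
```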
The UI 100 also displays the connections between the logical elements, as well as any connections to networks external to the logical network, as shown. For example, the Tier-0 gateway logical router 110 includes connections 112 to networks external to the logical network (each of these connections representing a different uplink port in some embodiments), as well as connection 114 to the Tier-1 gateway logical router 120 and connection 116 to the logical switch 132 (which attaches directly to the Tier-0 gateway logical router 110, rather than via a Tier-1 logical router). The connections between logical elements are represented using solid lines, which distinguishes these connections from the dashed lines between the physical elements and the logical elements they implement. Other embodiments may represent these connections and correlations in ways other than those shown (e.g., different colors of lines, etc.).
In some embodiments, when the number of a particular type of physical element implementing any particular logical element exceeds a threshold value (e.g., 5 elements), the physical elements of that particular type are represented by a group node. In the UI 100, the 6 edge nodes 150 that implement the Tier-0 gateway logical router 110 are represented by a group node, while the edge nodes 152a and 152b that implement the Tier-1 gateway logical router are represented by individual nodes. When a set of physical elements is represented by a group node, a count indicating the number of physical elements represented by that group node is displayed above the node along with the name of the type of physical element represented, as shown (i.e., 6 edge nodes, 5 hosts, 10 hosts, etc.).
Similarly, when the number of logical elements of a particular type branching off of another logical element exceeds a threshold value, the logical elements of that particular type are represented by a group node, according to some embodiments. As mentioned above, VMs, containers, and physical servers attached to a logical switch are always represented in the UI by a group node, in some embodiments. In the UI 100, the two sets of VMs 140 and 142 are each represented by a respective group node, with the logical element type “VM” indicated along with the number of VMs represented by each of the group nodes (i.e., 25 VMs and 40 VMs). While the counts for each of the logical elements in the group nodes in this example are relatively low, other embodiments may include hundreds of logical elements represented by a group node. In some embodiments, as will be described further below, group nodes (for both logical and physical elements) are selectable, and selecting a group node causes the UI 100 to expand the group node and display all of the elements represented by the group node.
As noted above, the group nodes are selectable in some embodiments. Within the UI 100, selectable items are distinguished from non-selectable items by appearing bolded (e.g., 6 edge nodes, 5 hosts, 10 hosts, 3 services, etc.). In other embodiments, selectable items may be distinguished in a different manner, such as by appearing in a different color than the non-selectable items. For non-selectable nodes, some embodiments of the invention provide information when a user hovers a cursor over the node.
In addition to the nodes, connections, and correlations displayed in the UI 100, some embodiments also display indications for services provided by nodes. For example, the Tier-1 gateway logical router node 120 includes a smaller node 122 indicating there are three (3) services provided by the node 120. In some embodiments, hovering over the services node 122, or selecting the services node, causes the UI 100 to display information detailing each of the three services, as will be described further below.
To generate the visualization, a process of some embodiments identifies, at 220, a set of logical elements in the logical network. For example, the process would identify the gateway logical routers 110 and 120, the logical switches 130 and 132, and the VMs 140 and 142 displayed in the UI 100. Some embodiments retrieve this information from a database of a network management application that manages the logical network (and possibly many other logical networks).
After identifying the set of logical elements, the process identifies, at 230, sets of physical elements that implement the set of logical elements. In the example UI 100, the process would identify the edge nodes 150 and 152a-152b and hosts 154 and 156. In some embodiments, the database storing logical network information also stores data mapping each logical network element to the physical elements that implement that logical network element.
After all of the logical and physical elements have been identified, the process displays, at 240, through the UI, the topology of the logical network and physical network elements that implement the logical network elements. The process then ends.
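A sketch of this process follows; the `mgmt_db` accessor names are hypothetical stand-ins for whatever lookups the network management database actually provides:

```python
def build_topology_view(mgmt_db, network_id, display_topology):
    """Steps 220-240: gather logical elements and their implementers, then display."""
    logical = mgmt_db.get_logical_elements(network_id)               # at 220
    implementers = {element.name: mgmt_db.get_implementing_elements(element)
                    for element in logical}                          # at 230
    display_topology(logical, implementers)                          # at 240
```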
Initially, the network visualization application is in state 305, in some embodiments, displaying the network topology (e.g., the UI 100 described above).
From state 305, a user of the application can perform numerous operations to modify the UI display by hovering a cursor (or performing a similar operation) over any of the nodes shown in the topology. For example, when a user hovers a cursor over a particular node, the network visualization application detects the hovering cursor and transitions to state 310 to display a first set of information for the node over a portion of the network topology.
When the network visualization application detects that the cursor has stopped hovering over the node, the application returns to state 305 (i.e., removing the additional information from the display). In addition, from state 310, the application can receive a selection of the hovered-over node. When the application receives such a selection, it transitions from state 310 to state 315 to display a second set of information for the node. In some embodiments, this second set of information includes the same information displayed when a user hovers (e.g., with a cursor) over the node, while in other embodiments some additional information about the represented element is displayed. From state 315, the application can receive a selection to hide the node information. When the application receives a selection to hide the node information, the application returns to state 305 to display the network topology.
In some embodiments, certain logical network elements (e.g., Tier-0 and/or Tier-1 logical router gateways) can provide various services (e.g., load balancing services, firewall services, network address translation (NAT) services, VPN services, etc.). As shown above in the UI 100, these services are indicated by a smaller services node (e.g., the services node 122 on the Tier-1 gateway logical router node 120).
For example, when the application detects a cursor hovering over a node representing services (e.g., the services node 122), it transitions from state 305 to state 320 to display a first set of information for the services over a portion of the network topology.
When the network visualization application detects that the cursor has stopped hovering over the node representing services, the application returns to state 305 (i.e., removing the additional information from the display). Additionally, from state 320, the application can receive a selection of the hovered-over services. When such a selection is received, the application transitions to state 325 to display a second, more detailed set of information for the selected services. For example, in some embodiments, a context menu is displayed from which a user can select a specific service and view additional details about the specific service in the context of the logical entity that provides the service. From state 325, the application can receive a selection to hide the services information, and as a result, returns to state 305.
Rather than receiving a selection of a node or service after a user has been hovering a cursor over the node or service, the application can receive a selection of a node and/or service directly from state 305 (i.e., without displaying the first set of information in response to detecting a hovering cursor). In these instances, the application transitions directly from state 305 to state 315 or 325, respectively, to display the information, and returns to state 305 upon receiving a selection to hide the information.
In some embodiments, as also mentioned above, the group nodes are selectable. When the application receives a selection to expand a group node, it transitions from state 305 to state 330 to expand the group node to show all elements represented by the group node within the topology, and then returns to state 305. In some embodiments, instead of adjusting the zoom level, the application pans the UI to show the elements of the expanded group node.
For example, selecting the group node representing the 25 VMs 140 in the UI 100 causes the application to expand the group node and display an individual node for each of the VMs represented by the group node.
In a second example, selecting the group node representing the 10 hosts 156 causes the application to expand the group node and display an individual node for each of the host computers represented by the group node.
From state 305, after a user has selected to expand a group node, the user can then select to collapse the elements of the expanded group node. When the application receives a selection to collapse an expanded group node, it transitions to state 335 to collapse the elements, and then returns to state 305.
Also, from state 305, users can select to pan the display and zoom in, or out, from the display. When the application receives a selection to pan the display (e.g., to view additional nodes in the topology after expanding a group node), it transitions to state 340 to pan, and then returns to state 305 with the display modified by the pan operation. Similarly, when the application receives a selection to zoom in or out on part of the display (e.g., to view as many elements of the topology in one display as possible, or to focus on a particular node or group of nodes), the application transitions to state 345 to zoom in or out on the display, and then returns to state 305 with the display modified by the zoom operation.
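The state diagram described above can be summarized as a transition table; the numbered states follow the text, while the event names are assumptions for illustration:

```python
TRANSIENT = {330, 335, 340, 345}  # perform their operation, then return to 305

TRANSITIONS = {
    (305, "hover-node"): 310,       (310, "stop-hover"): 305,
    (310, "select-node"): 315,      (315, "hide-info"): 305,
    (305, "hover-services"): 320,   (320, "stop-hover"): 305,
    (320, "select-services"): 325,  (325, "hide-info"): 305,
    (305, "select-node"): 315,      (305, "select-services"): 325,
    (305, "expand-group"): 330,     (305, "collapse-group"): 335,
    (305, "pan"): 340,              (305, "zoom"): 345,
}

def step(state: int, event: str) -> int:
    """Apply one UI event; transient states fall straight back to state 305."""
    next_state = TRANSITIONS.get((state, event), state)
    return 305 if next_state in TRANSIENT else next_state
```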
In addition to providing visualizations of network topologies, the network visualization application in some embodiments also provides users with an option to perform flow tracing for data message flows between logical network endpoints and view a visualization of the path between the logical network endpoints. Like the visualizations described above, the flow tracing visualization in some embodiments illustrates both the logical network elements along the path as well as the physical network elements that implement those logical network elements for packets sent along the path.
Also like the visualizations described above, the flow tracing visualization also organizes the logical network elements in a hierarchical pyramid, with the network endpoints displayed at the bottom left and bottom right of the pyramid, the highest logical element in the hierarchy at the top center (e.g., with Tier-0 logical routers being arranged at the top of the hierarchy and logical switches to which VMs connect at the bottom of the hierarchy), and additional logical elements traversed by the flow displayed in between, according to some embodiments. The physical network elements are displayed in the visualization on the left and right sides of the pyramid, with dashed lines between the physical network components and the logical network components they implement, in some embodiments.
In some embodiments, the visualization is provided in a UI in response to input selecting a source logical network endpoint and a destination logical network endpoint. The logical network endpoints may be VMs or other data compute nodes that are attached to a port of a logical switch, uplink ports of a logical router that represent a connection of the logical network to external networks (e.g., the Internet), or other endpoints. These endpoints may be attached to logical ports on the same logical switch, or different logical switches separated by one or more logical routers.
As described above, the physical network elements, in some embodiments, include host computers on which the VMs or other data compute nodes (i.e., logical network endpoints) operate, as well as physical machines that implement, e.g., centralized routing components of logical routers. Each host machine for hosting the data compute nodes, in some embodiments, executes a managed forwarding element (operating, e.g., within virtualization software of the host machine) that implements the logical networks for the data compute nodes that reside on the host machine. Thus, for example, the managed forwarding element will implement the logical switches to which its data compute nodes attach, as well as distributed routing components of the logical routers to which these logical switches attach, other logical switches attached to those distributed routing components, etc. Logical routers may include centralized routing components (e.g., for providing stateful services), which are implemented on a separate physical machine (e.g., as a VM or within a forwarding element datapath on the physical machine). The forwarding elements of these hosts may also implement the various logical switches and distributed routing components as needed.
In physical networks that use first-hop processing (i.e., the first managed forwarding element to process a packet performs logical processing not only for the first logical switch but also for any other distributed logical network elements until the packet needs to be either delivered to its destination or sent to a centralized routing component), the physical network element on which the source endpoint operates may implement multiple logical network elements for packets sent from that endpoint. As with the network topology visualization examples described above, physical network elements that implement multiple logical network elements will be illustrated with dashed lines from the physical network element to each logical network element that it implements, according to some embodiments.
The flow tracing visualization in some embodiments also includes information regarding the packet tracing operation from the source endpoint to the destination endpoint, with a visual linking between the packet tracing information and path visualization. The packet tracing operation of some embodiments injects a trace packet that simulates a packet sent from the source endpoint at the first physical element (e.g., the first hop managed forwarding element operating on the same host computer as a source VM). The physical elements along the path process the trace packet as they would an actual packet sent by the source, but in some embodiments, (i) the packet is not actually delivered to its destination and (ii) the physical elements that process the packets send messages to a centralized controller or manager regarding the processing of the packet (e.g., both logical and physical processing).
The messages sent to the controller in some embodiments may indicate that a forwarding element has performed various actions, such as physical receipt of a packet at a particular port, ingress of a packet to a logical forwarding element, logical forwarding of a packet according to a logical forwarding element, application of a firewall, access control, or other rule for a logical forwarding element to a packet, physical forwarding (e.g., encapsulation and output) by a managed physical forwarding element of a packet, dropping a packet, delivery of a packet to its destination endpoint (which is not actually performed, as mentioned), etc. The display of the packet tracing information, in some embodiments, includes a list of these messages, with each message indicating a type (e.g., drop, forward, deliver, receive), a physical network element that sent the message, and a logical network element to which the message relates (if the message is not a purely physical network action).
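One way to model these messages and the displayed list is sketched below; the message type, reporting physical element, and related logical element come from the text, while the structure itself is an assumption:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TraceObservation:
    kind: str                    # e.g., "received", "forwarded", "dropped", "delivered"
    physical_element: str        # the element that sent the message
    logical_element: Optional[str] = None  # None for purely physical actions

def format_trace(observations):
    """Render the observation list shown alongside the path visualization."""
    return [f"{o.kind}: {o.physical_element}"
            + (f" ({o.logical_element})" if o.logical_element else "")
            for o in observations]
```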
The network visualization application in some embodiments always displays the source machine at the bottom left of the pyramid, and the destination machine at the bottom right of the pyramid. In the example UI 800, the flow tracing was performed for a data message flow between a source web VM 840 and a destination database VM 842. After leaving the VM 840, data messages of the flow logically travel to the logical switch 830, then to the Tier-0 gateway logical router 810, the Tier-1 gateway logical router 820, and the logical switch 832, and finally to the VM 842.
The physical elements on the left side of the pyramid in the UI 800 include an edge node 850 and a host node 852. The edge node 850 implements the Tier-0 gateway logical router 810 and the logical switch 830, as illustrated by the dashed lines 860 from the edge node 850 to each of the nodes 810 and 830. The host node 852 implements the Tier-0 gateway logical router 810, the logical switch 830, and the VM 840, as illustrated by the dashed lines 862 from the host node 852 to each of the nodes 810, 830, and 840.
On the right side of the pyramid, the physical elements include a second instance of the edge node 850 and a host node 854. In this example, the edge node 850 appears twice because it implements logical elements on the left and right sides of the pyramid. On the left side, as described above, the edge node 850 implements the logical switch 830 and the Tier-0 gateway logical router 810, while on the right side, the edge node 850 implements Tier-1 gateway logical router 820 as indicated by the dashed line 864. Also, on the right side of the pyramid, the host node 854 implements the logical switch 832 and VM 842, as indicated by the dashed lines 866.
A first tunnel 870 is displayed between the edge node 850 and the host node 852, while a second tunnel 872 is displayed between the edge node 850 and the host node 854. In some embodiments, when a data message is successfully routed, the tunnels 870 and 872 are displayed using a first color (e.g., green), and when a data message is not successfully routed, the tunnels 870 and 872 are displayed using a second color (e.g., red) to indicate the failure.
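A sketch of this coloring rule, reusing the TraceObservation structure above; treating a "dropped" observation at either tunnel endpoint as the failure signal is an assumption for illustration:

```python
def tunnel_color(tunnel_endpoints, observations):
    """Green when the trace traversed the tunnel cleanly, red otherwise."""
    had_issue = any(obs.kind == "dropped" and obs.physical_element in tunnel_endpoints
                    for obs in observations)
    return "red" if had_issue else "green"
```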
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
The bus 905 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 900. For instance, the bus 905 communicatively connects the processing unit(s) 910 with the read-only memory 930, the system memory 925, and the permanent storage device 935.
From these various memory units, the processing unit(s) 910 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments. The read-only memory (ROM) 930 stores static data and instructions that are needed by the processing unit(s) 910 and other modules of the computer system. The permanent storage device 935, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 900 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 935.
Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 935, the system memory 925 is a read-and-write memory device. However, unlike storage device 935, the system memory is a volatile read-and-write memory, such as random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 925, the permanent storage device 935, and/or the read-only memory 930. From these various memory units, the processing unit(s) 910 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
The bus 905 also connects to the input and output devices 940 and 945. The input devices enable the user to communicate information and select commands to the computer system. The input devices 940 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 945 display images generated by the computer system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as touchscreens that function as both input and output devices.
Finally, the bus 905 also couples the computer system 900 to a network through a network adapter. In this manner, the computer system can be a part of a network of computers (e.g., a local area network (LAN), a wide area network (WAN), or an intranet), or a network of networks (e.g., the Internet). Any or all components of the computer system 900 may be used in conjunction with the invention.
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.