The present disclosure relates to data center fabric networks.
Data center fabric solutions, such as Leaf-Spine architectures, involve complex routing and load balancing algorithms to send a packet from one node to another in the data center fabric. In fabrics using dynamic load balancing schemes, the same packet flow can take a different path at different times based on the bandwidth it consumes. Traditional packet trace utilities inject a packet to the desired destination, but it may not trace the actual packet flow because the packet hashes may not match.
Presented herein are embodiments for tracing paths of packet flows in a data center fabric network. Filters are configured on nodes (e.g., switches) in the data center fabric network for a particular packet flow. Numerous such filters can be configured on each of the nodes, each filter for a different packet flow. When a filter detects a match, it outputs a log of such occurrence to a network controller. The network controller uses log data sent from the nodes as well as knowledge of the network topology (updated as changes occur in the network) to determine the path for a particular packet flow in the data center fabric network.
Thus, from the perspective of the network controller, a method is provided in which filter configuration information is generated to track a particular packet flow, the filter configuration information including one or more parameters of the particular packet flow. The filter configuration information is sent to the plurality of nodes in order to configure a filter for the particular packet flow at each of the plurality of nodes. The network controller receives from one or more of the plurality of nodes where a filter match occurs output indicating that a packet matching the filter configuration information for the filter for the particular packet flow passed through the associated node. The network controller analyzes the output received from one or more of the plurality of nodes where a filter match occurs to determine a path through the network for the particular packet flow.
There are no techniques available that can accurately trace a specific packet flow in current advanced data center fabric networks. In accordance with embodiments presented herein, packet path tracing can be done “inline” on the actual packet flow itself in data center fabric networks. This avoids the need to inject a new packet. Performing packet path tracing inline can also quickly pinpoint where, and why, a packet flow is being dropped in the network when the drop is a forwarding drop.
The techniques involve using filters in the network switches (data center nodes) and analyzing the filter output with the network topology information to generate (“stitch”) the path of a packet in the network.
Reference is first made to
A network controller 30 is in communication with each of the spine nodes S1, S2 and S3 and with each of the leaf nodes L1, L2, L3 and L4. A user (e.g., a network administrator) can log onto the network controller (locally or remotely via the Internet) from a user terminal 40. The user terminal 40 may be a desktop computer, server, laptop computer, or any computing/user device with network connectivity and a user interface.
The topology of the fabric network can change dynamically as some nodes may go down. With the Link Layer Discovery Protocol (LLDP) always running between the nodes, the current topology is always available at the network controller 30. In addition to the topology change, the nodes may also perform dynamic load balancing techniques to avoid congested paths in the network. Both of these factors can change the path taken by a packet flow.
When a filter hit (match) occurs, output is generated including, among other things, information identifying the incoming interface (port) at which the packet was received on the node where the filter hit occurs. This information is sent from the nodes where the filter hits occur to the network controller 30.
The network controller 30 correlates the nodes at which a filter hit is reported with the network topology of the fabric. As described above, the network topology of the fabric can be obtained from simple link level protocols like LLDP, which publishes all the neighbors of a given switch. By looking up the incoming interface information in an LLDP or other similar database, the network controller 30 can determine the neighbor switch that sent the packet. By deducing this information at every node where the packet is seen, the entire packet path can be determined. Thus, the network controller 30 analyzes the filter hit information, including the incoming interface of the packet on the nodes that hit the filter, against the network topology (obtained for example using LLDP) information to build the entire path of the packet flow in the data center fabric network.
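By way of illustration only, the path-stitching analysis described above can be sketched as follows; the node names, the shape of the filter-hit reports, and the LLDP-table format are hypothetical assumptions for this sketch, not a format prescribed by the embodiments:

```python
# Hypothetical sketch of path stitching. hits maps each node that
# reported a filter hit to the incoming interface of the matched packet;
# lldp maps (node, interface) to the neighbor that feeds that interface.
def stitch_path(hits, lldp):
    # For each reporting node, look up which neighbor sent it the packet.
    came_from = {node: lldp.get((node, iface)) for node, iface in hits.items()}
    # Invert: successor[u] = n means the packet traveled from u to n.
    successor = {u: n for n, u in came_from.items() if u in hits}
    # The ingress node is the one fed by a non-reporting neighbor (a host).
    start = next(n for n, u in came_from.items() if u not in hits)
    path = [start]
    while path[-1] in successor:
        path.append(successor[path[-1]])
    return path

hits = {"leaf1": "eth1", "spine2": "eth5", "leaf3": "eth2"}
lldp = {("leaf1", "eth1"): "host-a",
        ("spine2", "eth5"): "leaf1",
        ("leaf3", "eth2"): "spine2"}
print(stitch_path(hits, lldp))  # → ['leaf1', 'spine2', 'leaf3']
```

The sketch assumes every node on the path reports a hit; missing reports would leave gaps that the controller could flag rather than bridge.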
The system depicted in
Reference is now made to
The data center nodes, referred to by reference numerals 20(1)-20(N) and 22(1)-22(N), include a plurality of ports 200, one or more network processor Application Specific Integrated Circuits (ASICs) 210, a processor 220 and memory 230. Within the network processor ASICs 210 there are one or more configurable filters 240(1)-240(N), shown as Filter 1-Filter N. These are the filters that the network controller 30 can program/configure on each data center node to track certain packet flows. The network controller 30 sends filter configuration information 250 in order to configure the same filter on each data center node for each packet flow to be tracked. For example, Filter 1 would be configured with appropriate parameters/attributes on each data center node to track packet flow 1, Filter 2 would be configured with appropriate parameters/attributes on each data center node to track packet flow 2, and so on. The data center nodes return filter hit information 260 to the network controller. As generally described above, the filter hit information 260 to be logged at the network controller includes the information identifying the incoming interface of the packet at the node where the filter match (“hit”) occurred. The network processor ASICs 210 may be further capable of capturing additional forwarding information, such as the forward or drop packet action, next hop details, etc. In one form, the filters are instantiated with Embedded Logic Analyzer Module (ELAM) technology. However, in general, the filters may be implemented using configurable digital logic in the network processor ASICs, or in software stored in the memory and executed by the processor within each data center node.
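As a hypothetical sketch (the record layout and field names are illustrative assumptions, not the claimed format of the filter configuration information 250 or the filter hit information 260), the two messages exchanged between the network controller and the nodes could be modeled as small records:

```python
from dataclasses import dataclass, field

@dataclass
class FilterConfig:
    """Filter configuration information sent controller -> node."""
    filter_id: int                  # e.g., Filter 1 tracks packet flow 1
    match_fields: dict = field(default_factory=dict)  # packet fields to match

@dataclass
class FilterHit:
    """Filter hit information returned node -> controller."""
    filter_id: int
    node: str
    incoming_interface: str         # key data logged per hit
    action: str = "forward"         # optional: forward/drop, if ASIC supports it

cfg = FilterConfig(filter_id=1,
                   match_fields={"src_ip": "10.0.0.5", "l4_dst_port": 443})
hit = FilterHit(filter_id=1, node="spine2", incoming_interface="eth5")
```

Each node would program the `match_fields` of a received `FilterConfig` into one of its configurable filters, in hardware or software.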
The memory 130 in the network controller 30 and the memory 230 in the data center nodes may include read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory shown in
The filter can be based on any field of a packet, e.g., any field in the L2 header, L3 header or L4 header of a packet. Examples of packet fields/attributes that may be used for a packet filter include (but are not limited to):
Source media access control (MAC) address (inner and/or outer depending whether tunneling is used)
Destination MAC address (inner and/or outer depending whether tunneling is used)
Source Internet Protocol (IP) address (inner and/or outer depending whether tunneling is used)
Destination IP address (inner and/or outer depending whether tunneling is used)
Domain name of the node (switch)
Port number
Layer 4 (e.g., User Datagram Protocol (UDP) or Transmission Control Protocol (TCP)) Source Port or Destination Port
Virtual Network Identifier (VNID) for a Virtual Extensible Local Area Network (VxLAN) packet
Virtual Local Area Network (VLAN) identifier
Values for one or more of these fields would be set. If a packet arriving at a node has values that match the values set for the corresponding fields in the filter, then a match is declared.
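A minimal sketch of such a filter match in software, assuming hypothetical field names, is the following: only the fields for which values were set in the filter participate in the comparison.

```python
# Sketch of a packet-flow filter: a match is declared only if every field
# set in the filter equals the corresponding packet field. Field names
# here are hypothetical.
def filter_matches(filter_fields: dict, packet_fields: dict) -> bool:
    return all(packet_fields.get(name) == value
               for name, value in filter_fields.items())

flt = {"src_ip": "10.0.0.5", "dst_ip": "10.0.1.9", "l4_dst_port": 443}
pkt = {"src_mac": "aa:bb:cc:dd:ee:01", "src_ip": "10.0.0.5",
       "dst_ip": "10.0.1.9", "l4_src_port": 51512, "l4_dst_port": 443}
print(filter_matches(flt, pkt))  # → True
```

Fields absent from the filter (here, the source MAC address and Layer 4 source port) are wildcards and do not affect the result.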
The following is example output that may be generated by filters at nodes in a network. Naming conventions are used for the various nodes, but the naming is arbitrary.
List of Switches in the network where a particular filter is configured:
The path of the packet determined from data captured from the nodes where matches occurred:
Thus, the network controller 30 receives data output from the filters that had a match, and builds a database from that data. Using the network configuration information stored (and continuously updated) at the network controller 30, the network controller 30 can then build a list indicating the nodes along the path of the packet flow.
Reference is now made to
Furthermore,
Turning now to
At 440, the network controller receives the log data from the filters at nodes where a match occurs. At 450, the network controller analyzes the filter match output with respect to network topology information for the network in order to build a packet path through the network for the packet flow. At 460, the network controller may determine reasons for packet drops, if such drops are determined to occur in the path of a packet flow.
To summarize, presented herein are techniques for a tool that takes a list of nodes (e.g., switches) and packet flow parameters for a particular packet flow in order to trace and produce the packet path for the flow. ELAM packet filters in the network processor ASICs may be used to filter the packet and log the forwarding information, which is sent back to the network controller. In data center fabrics using dynamic load balancing schemes, these techniques give an accurate packet path of a specific packet flow at a given time. This also avoids the need to inject a debug packet, as existing tools do. The tool also provides a method to collect forwarding data from all the nodes in the network to quickly debug where and why a packet flow is getting dropped in the network.
Thus, these techniques can determine where a packet flow is getting dropped in the case the receiving node does not receive the packets. The last node where the packets hit the filter is the ‘culprit’ node in the path. If the network processor ASIC of that node is capable of giving the drop reason, then the drop reason can be captured by the filter output, which can help in quick triaging of the problem.
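Assuming a stitched path has already been produced at the controller (node names and the path representation here are hypothetical), identifying the ‘culprit’ node can be sketched as follows:

```python
# Hypothetical sketch: if the expected egress node never reports a filter
# hit, the last node on the stitched path is the likely drop point.
def find_culprit(path, expected_egress):
    if path and path[-1] != expected_egress:
        return path[-1]
    return None  # packet reached the egress node; no forwarding drop

print(find_culprit(["leaf1", "spine2"], "leaf3"))           # → spine2
print(find_culprit(["leaf1", "spine2", "leaf3"], "leaf3"))  # → None
```

If the network processor ASIC at the returned node also logs a drop reason, that reason can be attached to the result for faster triage.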
There are many advantages to these techniques. In particular, in dynamic load balancing schemes, the same packet flow can take different paths at different times based on its bandwidth. A traditional traceroute utility cannot inject a packet in the same packet flow, and therefore it cannot help in debugging a specific packet flow if it gets dropped. There are no known utilities that can gather forwarding data from all the nodes where the packet flow was seen, in order to be able to debug any packet flow drops in a fabric network. The techniques presented herein can trace a packet path without needing to send additional debug packets.
In summary, in one form, a method is provided comprising: at a network controller that is in communication with a plurality of nodes in a network: generating filter configuration information to track a particular packet flow, the filter configuration information including one or more parameters of the particular packet flow; sending the filter configuration information to the plurality of nodes in order to configure a filter for the particular packet flow at each of the plurality of nodes; receiving from one or more of the plurality of nodes where a filter match occurs output indicating that a packet matching the filter configuration information for the filter for the particular packet flow passed through the associated node; and analyzing the output received from one or more of the plurality of nodes where a filter match occurs to determine a path through the network for the particular packet flow.
In another form, a system is provided comprising: a plurality of nodes in a network, each node including a plurality of ports and one or more network processors that are used to process packets that are received at one of the plurality of ports for routing in the network; a network controller in communication with the plurality of nodes, wherein the network controller is configured to: generate filter configuration information to track a particular packet flow, the filter configuration information including one or more parameters of the particular packet flow; send the filter configuration information to the plurality of nodes in order to configure a filter for the particular packet flow at each of the plurality of nodes; receive from one or more of the plurality of nodes where a filter match occurs output indicating that a packet matching the filter configuration information for the filter for the particular packet flow passed through the associated node; and analyze the output received from one or more of the plurality of nodes where a filter match occurs to determine a path through the network for the particular packet flow.
In still another form, an apparatus is provided comprising: a network interface unit configured to enable communications over a network; a memory; a processor coupled to the network interface unit and the memory, wherein the processor is configured to: generate filter configuration information to track a particular packet flow through a network that includes a plurality of nodes, the filter configuration information including one or more parameters of the particular packet flow; send, via the network interface unit, the filter configuration information to the plurality of nodes in order to configure a filter for the particular packet flow at each of the plurality of nodes; receive, via the network interface unit, from one or more of the plurality of nodes where a filter match occurs output indicating that a packet matching the filter configuration information for the filter for the particular packet flow passed through the associated node; and analyze the output received from one or more of the plurality of nodes where a filter match occurs to determine a path through the network for the particular packet flow.
In yet another form, one or more non-transitory computer readable storage media are provided storing/encoded with instructions that, when executed by a processor, cause the processor to: generate filter configuration information to track a particular packet flow through a network that includes a plurality of nodes, the filter configuration information including one or more parameters of the particular packet flow; cause the filter configuration information to be sent to the plurality of nodes in order to configure a filter for the particular packet flow at each of the plurality of nodes; receive from one or more of the plurality of nodes where a filter match occurs output indicating that a packet matching the filter configuration information for the filter for the particular packet flow passed through the associated node; and analyze the output received from one or more of the plurality of nodes where a filter match occurs to determine a path through the network for the particular packet flow.
The above description is intended by way of example only.
This application claims priority to U.S. Provisional Application No. 62/081,061, filed Nov. 18, 2014, the entirety of which is incorporated herein by reference.