Link aggregation typically has been used to increase the communication bandwidths and fault tolerance capabilities of network devices. With link aggregation, multiple physical links (network cables, for example) between two network devices form a single logical link, or link aggregation group (LAG), which has a larger available bandwidth than any of the individual physical links. Moreover, link aggregation provides for failover, in that should one of the physical links of the LAG fail, communication between the network devices continues using the remaining physical links.
Referring to
For purposes of ensuring that a given server 110 (and its VMs) remains visible and functional during such events as a local network port outage or an access switch outage, the network ports, or interfaces, of a given server 110 (such as network interfaces 150 of the server 110-1, for example) may be connected in one or multiple link aggregation group(s) (LAG(s)) with network interfaces 163 of one or multiple network switches 162 (of network fabric 160). Thus, the edge of the network encroaches beyond the network fabric 160 and onto the server 110 itself. Two or more network interfaces 150 of a given server 110 may be physically connected to multiple network interfaces 163 of a given network switch 162 (to form a LAG) or may be physically connected to network interfaces 163 of multiple network switches 162 (to form a split LAG (SLAG)).
As a more specific example,
An aggregation is a logical representation of the network interfaces on a given network device (such as a network switch or a server), which are grouped together to form a LAG or SLAG. For a given aggregation, one of the network interfaces is the master, or “aggregator;” and the remaining network interface(s) are the member(s) of the aggregation. Thus, for the example of
Referring back to
For purposes of discovering aggregation information for a given server 110 of the network 100, the network manager 174 generally performs a technique 300 that is depicted in
Referring back to
In accordance with example implementations, the other servers 110 are standalone servers that include such hardware as CPU(s), random access memory (RAM), static storage, NIC interfaces, and so forth.
As depicted in
For the example that is depicted in
Not all of the servers 110, however, may be SNMP capable, and some may be only partially SNMP capable. In other words, a given server 110 may not be capable of providing certain MIB information, such as the MIB interfaces or ifStack tables. For example, certain MIB information may not be available or may not be loaded onto the server 110. Thus, a given server 110 may provide no MIB information or only partial MIB information, as further described herein.
In general, MIBs traditionally have been available for network devices, such as switches and routers. However, servers running traditional operating systems, such as Windows or Unix, may not provide access to SNMP MIBs that identify port and connection redundancies. In this manner, datacenter operators may not load such MIBs on the servers, and in many cases, a given server 110 may provide no SNMP capabilities at all. This makes SNMP-based identification of the server interface aggregations and aggregation connections potentially challenging for the network manager 174. As described herein, the network manager 174 may discover server interface aggregations and switch-to-server interface aggregations for a large number of different operating system-based platforms, such as platforms that run Windows, Linux, Solaris, HPUX and AIX operating systems, as examples.
Referring to
Pursuant to the technique 400, the network manager 174 first determines (decision block 402) whether the server supports SNMP, or is “SNMP capable.” If so, the network manager 174 requests (block 404) the SNMP-based interfaces table and the ifStack table from the server. In general, the interfaces table identifies the logical network interfaces of the server and the corresponding characteristics of these logical interfaces, such as physical addresses, Internet protocol (IP) addresses, aggregation labels, and so forth. The ifStack table identifies a hierarchical relationship among the logical network interfaces of the server.
The server may or may not, however, provide an ifStack table, even if the server is SNMP capable. If the server provides the ifStack table (decision block 406), then the network manager 174 creates (block 408) aggregator-to-member relationships by matching physical member interface indexes to logical aggregator interface indexes, using both the interfaces table and the ifStack table. If, however, the server does not support the ifStack table (decision block 406), then the network manager 174 processes the interfaces table and makes various assumptions based on the information found in this table.
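As a hedged illustration, the aggregator-to-member matching described above might be sketched as follows, assuming the two SNMP tables have already been retrieved and parsed into simple Python structures. The dictionary layouts and field names here are illustrative assumptions, not the exact MIB encodings.

```python
# Sketch: derive aggregator-to-member relationships from an SNMP
# interfaces table and an ifStack table. The table layouts below are
# assumptions for illustration; real MIB rows carry more columns.

# interfaces table: ifIndex -> interface attributes
interfaces = {
    1: {"descr": "eth0", "phys_addr": "aa:bb:cc:dd:ee:01"},
    2: {"descr": "eth1", "phys_addr": "aa:bb:cc:dd:ee:01"},
    3: {"descr": "bond0", "phys_addr": "aa:bb:cc:dd:ee:01"},
}

# ifStack table: (higher_layer_ifIndex, lower_layer_ifIndex) pairs,
# i.e. the logical aggregator stacked on top of its physical members.
if_stack = [(3, 1), (3, 2)]

def build_aggregations(interfaces, if_stack):
    """Map each aggregator ifIndex to the list of its member ifIndexes."""
    aggregations = {}
    for higher, lower in if_stack:
        # Only consider indexes that actually appear in the interfaces table.
        if higher in interfaces and lower in interfaces:
            aggregations.setdefault(higher, []).append(lower)
    return aggregations

print(build_aggregations(interfaces, if_stack))  # {3: [1, 2]}
```

In this sketch, the logical interface bond0 (ifIndex 3) is recognized as the aggregator over the physical members eth0 and eth1 because the ifStack table stacks it above them.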
More specifically, in accordance with example implementations, the network manager 174 performs the following technique. First, by examining the interfaces table, the network manager 174 determines (block 410) one or multiple groups of server network interfaces that share a common physical address. In this regard, the network manager 174 considers network interfaces that share a common physical address, such as a MAC address, as belonging to a server aggregation group. Next, the network manager 174 begins a process (by deciding whether there is another group to process in decision block 411) to identify the aggregator for each of the identified aggregation group(s).
More specifically, one way to identify the aggregator for a given group is for the network manager 174 to determine (decision block 412) whether a specific label, or aggregator "ifType" (or "IANAifType"), is present in the interfaces table. For example, a given logical network interface acting as an aggregator may be of an ifType of ieee8023adLag, propMultiplexor or another type. If a given logical network interface is associated with one of these labels, then, in accordance with example implementations, the network manager 174 labels (block 414) the interface as being the aggregator and labels the other logical interface(s) of the aggregation as being member(s) of the aggregation. Control then returns to decision block 411 to process the next group of interfaces.
If the network manager 174 fails to find the aggregator ifType label (pursuant to decision block 412), then the network manager 174 proceeds to identify the aggregator based on other criteria. More specifically, the network manager 174 determines (decision block 416) whether one of the logical interfaces has an IP address. If so, the network manager 174 labels the interface having the IP address as the aggregator and labels the other interface(s) as member(s), pursuant to block 418. Control then returns to decision block 411 to process the next aggregation, if any.
If, however, pursuant to decision block 416, the network manager 174 does not find an IP address, the network manager 174 applies a different criterion to identify the aggregator. In this manner, the network manager 174 determines (decision block 420) whether one of the logical network interfaces has a different speed. If so, the network manager labels the interface having the different speed as the aggregator and labels the other interface(s) as member(s) of the aggregation, pursuant to block 422. Control then returns to decision block 411 to process the next aggregation, if any.
Lastly, if the network manager 174 does not, pursuant to decision block 420, determine that one of the logical interfaces has a different speed, then the network manager 174 labels the interface having the highest numbered interface index as the aggregator and labels the other interface(s) as the member(s) of the aggregation, pursuant to block 424. Control then returns to decision block 411.
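Taken together, blocks 410 through 424 amount to a fallback cascade for picking the aggregator of each group. A minimal sketch of that cascade follows, assuming each interfaces-table row has already been reduced to a small record; the field names (phys_addr, if_type, ip_addr, speed, if_index) are illustrative assumptions, not actual MIB object names.

```python
from collections import defaultdict

# ifType labels that mark an interface as an aggregator, per the text
# (ieee8023adLag, propMultiplexor); string form is an assumption here.
AGGREGATOR_IFTYPES = {"ieee8023adLag", "propMultiplexor"}

def group_by_phys_addr(rows):
    """Block 410: interfaces sharing a physical (MAC) address are
    assumed to belong to one server aggregation group."""
    groups = defaultdict(list)
    for row in rows:
        groups[row["phys_addr"]].append(row)
    return [group for group in groups.values() if len(group) > 1]

def pick_aggregator(group):
    """Blocks 412-424: apply the fallback criteria in order."""
    # 1. An explicit aggregator ifType label wins (blocks 412/414).
    for row in group:
        if row.get("if_type") in AGGREGATOR_IFTYPES:
            return row
    # 2. Otherwise, the interface holding an IP address (blocks 416/418).
    for row in group:
        if row.get("ip_addr"):
            return row
    # 3. Otherwise, the one interface whose speed differs from the rest
    #    (blocks 420/422), e.g. a bond reporting the summed speed.
    speeds = [row["speed"] for row in group]
    for row in group:
        if speeds.count(row["speed"]) == 1:
            return row
    # 4. Otherwise, fall back to the highest-numbered interface index
    #    (block 424).
    return max(group, key=lambda row: row["if_index"])
```

A caller would run `group_by_phys_addr` over all interfaces-table rows and then call `pick_aggregator` on each group, labeling the remaining rows of the group as members.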
If the network manager 174 determines (decision block 402) that the server is not SNMP capable, then the interfaces and ifStack tables are not available, and the network manager 174 identifies aggregation information based on other criteria. In this manner, the network manager 174 collects (block 430) address resolution protocol (ARP) information from the switches and routers of the network fabric 160 to pair (block 432) physical addresses of the logical interfaces to IP addresses using the ARP information. By using forwarding database (FDB) table information, the network manager 174 identifies (block 434) a given server network interface as an aggregator if the server interface communicates with the switch's aggregator interface. It is noted that for this case, the aggregation includes no members other than the aggregator, because the SNMP interface information is not available from the server.
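For this non-SNMP case, a rough sketch of the ARP pairing (block 432) and FDB check (block 434) might look like the following. The table contents are assumed to have already been collected from the fabric's switches and routers, and the entry layouts are illustrative assumptions.

```python
def pair_mac_to_ip(arp_entries):
    """Block 432: map each physical (MAC) address to its IP address,
    using ARP entries gathered from the fabric's switches and routers."""
    return {entry["mac"]: entry["ip"] for entry in arp_entries}

def find_server_aggregators(fdb_entries, switch_aggregator_ports, mac_to_ip):
    """Block 434: a server MAC learned on a switch aggregator port is
    taken to be the server-side aggregator interface. With no SNMP data
    from the server, the aggregation has no members besides the aggregator."""
    aggregators = []
    for entry in fdb_entries:
        if entry["port"] in switch_aggregator_ports:
            aggregators.append({
                "mac": entry["mac"],
                "ip": mac_to_ip.get(entry["mac"]),
            })
    return aggregators

# Illustrative data: one server MAC learned on the switch's trunk
# (aggregator) port "Trk1", another on an ordinary port.
arp = [{"mac": "aa:bb:cc:00:00:01", "ip": "192.0.2.10"}]
fdb = [{"mac": "aa:bb:cc:00:00:01", "port": "Trk1"},
       {"mac": "aa:bb:cc:00:00:02", "port": "1/3"}]
print(find_server_aggregators(fdb, {"Trk1"}, pair_mac_to_ip(arp)))
# → [{'mac': 'aa:bb:cc:00:00:01', 'ip': '192.0.2.10'}]
```

Only the MAC seen on the switch's aggregator port is reported as a server-side aggregator; the MAC on the ordinary port is ignored.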
It is noted that each server operating system presents its network interface model using a different SNMP interfaces table representation. For example, Linux is relatively simple, responding with a grouped interface aggregation. Windows® is relatively complex, offering a relatively large number of logical interfaces as potential aggregator candidates. A Solaris operating system may provide the aggregator interface with no members, while HPUX provides the SNMP interface information in a manner closer to that of Linux. Thus, the network manager 174 may modify its heuristics for server aggregator identification as new operating systems are added to the supported list.
After discovering a given server network interface aggregation, the network manager 174 next discovers the physical, or L2, connections, which physically connect the server interfaces to the switch interfaces. The L2 connections may be discovered from either the server or the switch side. In this regard, different switch-side and server-side aggregations and corresponding connections are formed over the course of time. Thus, the network manager 174 uses both server-side connection discovery (
Among the possible advantages of the systems and techniques that are disclosed herein, server interface aggregations as well as L2 connections are discovered, which are both important for maintaining accessibility of servers. Moreover, server aggregations and L2 connections may be identified even when one or multiple servers do not support SNMP. The systems and techniques that are disclosed herein allow incident alerting when member(s) of a given server aggregation have faulted or when an entire aggregation has faulted. The systems and techniques that are disclosed herein permit incident alerting when the server's aggregate L2 link to its access switch or switches has faulted. Virtual server aggregations may be identified and correctly monitored for operational status, bandwidth utilization, and other monitored conditions. Other and different advantages are contemplated, in accordance with other example implementations.
Other implementations are contemplated and are within the scope of the appended claims. For example, although server-to-switch link aggregations are described in the examples above, the network manager 174 may find heterogeneous network equipment aggregations using the same techniques. For example, a network switch made by a first company may have a link aggregation to a network switch made by a second company, but neither device's SNMP agent may provide standard MIB aggregation responses that contain data identifying the aggregations and allow linking of the aggregations. However, using the above-described techniques, the network manager 174 may find and link aggregations between heterogeneous network equipment. Because network switches tend to support the ifStack MIB, the network manager 174 may use this information to create and link the aggregations, but the other techniques described above may also be used.
Likewise, in accordance with further example implementations, the network manager 174 may use techniques that are disclosed herein to find router-to-switch and router-to-router aggregations.
Thus, referring to
While the present invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of this disclosure, will appreciate numerous modifications and variations therefrom. It is intended that the appended claims cover all such modifications and variations as fall within the true spirit and scope of this present invention.
Filing Document | Filing Date | Country | Kind |
---|---|---|---|
PCT/US2013/067547 | 10/30/2013 | WO | 00 |
Publishing Document | Publishing Date | Country | Kind |
---|---|---|---|
WO2015/065385 | 5/7/2015 | WO | A |
Number | Name | Date | Kind |
---|---|---|---|
7760631 | Basturk | Jul 2010 | B2 |
8443065 | White | May 2013 | B1 |
9258195 | Pendleton | Feb 2016 | B1 |
20100290473 | Enduri et al. | Nov 2010 | A1 |
20110093579 | Koizumi et al. | Apr 2011 | A1 |
20120093004 | Nishi | Apr 2012 | A1 |
20120131662 | Kuik | May 2012 | A1 |
20120275311 | Ivershen et al. | Nov 2012 | A1 |
20120300773 | Maeda | Nov 2012 | A1 |
20120307828 | Agarwal et al. | Dec 2012 | A1 |
20130259059 | Yamada | Oct 2013 | A1 |
20140064056 | Sakata | Mar 2014 | A1 |
20140185461 | Gautreau | Jul 2014 | A1 |
20150372862 | Namihira | Dec 2015 | A1 |
Entry |
---|
Extreme Networks, Inc., Exploring New Data Center Network Architectures with Multi-switch Link Aggregation (M-LAG), White Paper, Jul. 29, 2011, 5 pages http://www.extremenetworks.com/libraries/whitepapers/WPDCArchitectureswM-LAG1750. |
Hewlett-Packard Company, HP FlexFabric Reference Architecture, Technical White Paper, Aug. 5, 2012, 70 pages http://h20195.www2.hp.com/v2/GetPDF.aspx/4AA3-4150ENW.pdf. |
International Searching Authority, The International Search Report and the Written Opinion, dated Jul. 28, 2014, 9 Pages. |
Netgear Prosafe, ProSafe® 24-port 10 Gigabit Stackable L2+ Managed Switch Data Sheet XSM7224S, Data Sheet, Jan. 19, 2012, 14 pages http://www.netgear.com/images/XSM7224S_DS_19Jan1218-10913.pdf. |
Oracle, System Administration Guide: Network Interfaces and Network Virtualization, 2010, 10 pages http://docs.oracle.com/cd/E19120-01/open.solaris/819-6990/fpjvl/index.html. |
Number | Date | Country | |
---|---|---|---|
20160261489 A1 | Sep 2016 | US |