The invention relates generally to data centers and data processing. More particularly, the invention relates to management of a distributed fabric system.
Data centers are generally centralized facilities that provide Internet and intranet services needed to support businesses and organizations. A typical data center can house various types of electronic equipment, such as computers, servers (e.g., email servers, proxy servers, and DNS servers), switches, routers, data storage devices, and other associated components. The infrastructure of the data center, specifically the layers of switches in the switch fabric, plays a central role in supporting these services. Implementations of data centers can have hundreds or thousands of switch chassis, and the interconnections among the various chassis can be complex and difficult to follow. Moreover, these numerous and intricate interconnections can make problems arising in the data center formidable to troubleshoot.
The invention features a method for managing a distributed fabric system in which a plurality of scaled-out fabric coupler (SFC) chassis is connected to a plurality of distributed line card (DLC) chassis over fabric communication links. The method comprises detecting, by a fabric element chip of each SFC chassis, connectivity between the fabric element chip of that SFC chassis and a fabric interface of a switching chip of one or more of the DLC chassis. In response to the detected connectivity, connection information is stored in memory for each fabric communication link between the fabric element chip of that SFC and a fabric interface of a switching chip of the one or more of the DLC chassis. The memory is accessed to acquire the connection information for each fabric communication link. A topology of the distributed fabric system is constructed from the acquired connection information for each communication link.
The above and further advantages of this invention may be better understood by referring to the following description in conjunction with the accompanying drawings, in which like numerals indicate like structural elements and features in various figures. The drawings are not necessarily to scale, emphasis instead being placed upon illustrating the principles of the invention.
Distributed fabric systems described herein include independent scaled-out fabric coupler (SFC) chassis in communication with a plurality of independent distributed line card (DLC) chassis. The SFC chassis have one or more cell-based fabric element chips that communicate through SFC fabric ports over fabric communication links with fabric interfaces of the switching chips on the DLC chassis. By reachability messaging, each fabric element chip can detect connectivity between an SFC fabric port and a DLC fabric interface, and can do so with high frequency. Advantageously, the applicants recognized that such connectivity information can form the basis of constructing and displaying a topology of the distributed fabric system. From a management station, a network administrator can display this topology graphically, and enhance the topological graph with other information about the communication links, such as link bandwidth and link status. The graphical form of the topology gives the network administrator an encompassing view of the distributed fabric system and a portal through which to manage and modify the topology, for example, by configuring the status of the individual communication links.
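The high-frequency connectivity detection described above can be sketched in simplified form as a connectivity matrix that the fabric element chip refreshes as reachability messages are answered. This is an illustrative sketch only, not the chip's actual logic; the function and record names are assumptions:

```python
# Hedged sketch of reachability-based connectivity detection: the
# fabric element chip periodically records which DLC fabric interfaces
# answered a reachability message on each SFC fabric port.

def update_connectivity(matrix, sfc_port, dlc_id, reachable):
    """Record whether a DLC fabric interface is reachable from an SFC port."""
    matrix[(sfc_port, dlc_id)] = reachable
    return matrix

matrix = {}
# Reachability replies observed on two SFC fabric ports.
update_connectivity(matrix, sfc_port=1, dlc_id="DLC-1", reachable=True)
update_connectivity(matrix, sfc_port=2, dlc_id="DLC-2", reachable=True)
# A later probe finds DLC-2 unreachable; the matrix is refreshed at
# high frequency, so the stored view tracks the current link state.
update_connectivity(matrix, sfc_port=2, dlc_id="DLC-2", reachable=False)
assert matrix[(1, "DLC-1")] is True
assert matrix[(2, "DLC-2")] is False
```

Because the matrix is keyed by (SFC fabric port, DLC), each refresh overwrites stale entries rather than accumulating them, which is what allows a current topology to be read out at any time.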
The data center 10 includes an SFC chassis 12 in communication with network elements 14, referred to herein as distributed line cards (DLCs) 14. The SFC chassis 12 and DLCs 14 together form a distributed fabric system and correspond to a single cell-switched domain. Although only four DLC chassis 14 are shown, the number of DLC chassis in the cell-switched domain can range into the hundreds or thousands. The DLCs 14 are members of a designated cluster. The data center 10 can have more than one cluster, although each DLC can be a member of only one cluster. The data center 10 may be embodied at a single site or distributed among multiple sites. Although shown outside of the data center 10, either (or both) of the management station 4 and server 6 may be considered part of the data center 10.
In the data center 10, the functionality occurs on three planes: a management plane, a control plane, and a data plane. The management of the cluster, such as configuration management, runtime configuration management, presentation of information (show and display), graph generation, and handling SNMP requests, occurs on the management plane. The control plane is associated with those functions involving network signaling and control protocols. The data plane manages data flow. In the data center 10, the functionality of the management plane and of the control plane is centralized, the management plane and control plane being implemented predominately at the server 6, and the functionality of the data plane is distributed among the DLCs 14 and SFCs 12.
The management station 4 provides a centralized point of administration for managing and controlling the networked switches 12, 14 and the controller 6 of the distributed fabric system. Through the management station 4, a user or network administrator of the data center 10 communicates with the controller 6 in order to manage the cluster, with conceivably hundreds of DLCs, tens of SFCs, and one or more controllers, from a single location. A graphical user interface (GUI) application executing on the management station 4 serves to provide the network administrator with a view of the entire network topology of the distributed fabric system. An example of such a GUI application is Blade Harmony Manager® provided by IBM Corporation of Armonk, N.Y. In brief, the GUI-based application can use the information collected by the fabric element chips of the SFCs to represent an entire distributed fabric system topology in graphical form, as described in more detail below.
In addition, the management station 4 can connect directly (point-to-point) or indirectly to a given DLC 14 of the data center 10 over one of a variety of connections, such as standard telephone lines, digital subscriber line (DSL), asymmetric DSL, LAN or WAN links (e.g., T1, T3), broadband connections (Frame Relay, ATM), and wireless connections (e.g., 802.11(a), 802.11(b), 802.11(g), 802.11(n)). Using a network protocol, such as Telnet or SNMP (Simple Network Management Protocol), the management station 4 can access a command-line interface (CLI) of the control plane server 6 of the whole system for purposes of managing the distributed fabric system and accessing the topology and statistical information collected by the various network switches, as described in more detail below.
In general, the server 6 is a computer (or group of computers) that provides one or more services to the data center 10, examples of which include, but are not limited to, email servers, proxy servers, DNS servers, and a control server running the control plane of the distributed fabric system. To support the control plane functionality of an entire DLC cluster, the server 6 is configured with sufficient processing power (e.g., with multiple processor cores).
Each SFC chassis 12 includes one or more cell-based switch fabric elements (FE) 16 in communication with N SFC fabric ports 18. In this example embodiment, there are at least as many DLC chassis 14 as SFC fabric ports 18 in each SFC chassis 12 in the distributed fabric system. Each fabric element 16 of an SFC chassis 12 switches cells between SFC fabric ports 18 based on destination information in the cell header.
Each DLC chassis 14 has network ports 20, network processors 22-1, 22-2 (also called switching chips), and fabric ports 24. In general, network processors 22 are optimized for packet processing. Each network processor 22 is in communication with every fabric port 24 and with a subset of the network ports 20 (for example, each network processor 22 can switch cells derived from packet traffic received on half the network ports of the DLC). An example implementation of the network processor 22 is the BCM 88650, a 28-port, 10 GbE switch device produced by Broadcom, of Irvine, Calif. The network ports 20 are in communication with the network 8 external to the switched domain, such as the Internet. In one embodiment, each DLC chassis 14 has forty network ports 20, with each of the network ports 20 being configured as a 10 Gbps Ethernet port. The aggregate network bandwidth of the DLC chassis 14 is 400 Gbps.
The distributed fabric system in
The communication link 26 between each DLC fabric port 24 and an SFC fabric port 18 can be a wired connection. Interconnect variants include Direct Attached Cable (DAC) and optical cable. DAC provides five to seven meters of cable length, whereas optical cable offers up to 100 meters of connectivity within the data center (standard optical connectivity can exceed 10 km). Alternatively, the communication link 26 can be a direct physical connection (i.e., electrical connectors of the DLC fabric ports 24 physically connect directly to electrical connectors of the SFC fabric ports 18). In one embodiment, each communication link supports 12 SerDes (serializer/deserializer) channels (each channel comprising a transmit lane and a receive lane).
During operation of this distributed fabric system, a packet arrives at a network port 20 of one of the DLCs 14. The network processor 22 extracts required information from the packet header and payload to form pre-classification metadata. Using this metadata, the network processor 22 performs table look-ups to find the physical destination port for this packet and other associated actions. With these results and metadata, the network processor 22 creates and appends a proprietary header to the front of the packet. The network processor 22 of the DLC 14 in communication with the network port 20 partitions the whole packet including the proprietary header into smaller cells, and adds a cell header (used in ordering of cells) to each cell. The network processor 22 sends the cells out through the DLC fabric ports 24 to each of the SFCs 12, sending different cells to different SFCs 12. For example, consider an incoming packet with a length of 1600 bits. The receiving network processor 22 of the DLC 14 can split the packet into four cells of 400 bits (before adding header information to those cells). The network processor 22 then sends a different cell to each of the four SFCs 12, in effect, achieving a load balancing of the cells across the SFCs 12.
A cell-based switch fabric element 16 of each SFC 12 receiving a cell examines the header of that cell, determines its destination, and sends the cell out through the appropriate one of the fabric ports 18 of that SFC to the destination DLC 14. The destination DLC 14 receives all cells related to the original packet from the SFCs, reassembles the original packet (i.e., removing the added headers, combining cells), and sends the reassembled packet out through the appropriate one of its network ports 20. Continuing with the previous four-cell example, consider that each SFC determines that the destination DLC is DLC 14-2. Each SFC 12 sends its cell out through its fabric port 18-2 to the DLC 14-2. The DLC 14-2 reassembles the packet from the four received cells (the added headers providing an order in which to combine the cells) and sends the packet out of the appropriate network port 20. The pre-classification header information in the cells determines the appropriate network port.
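The segmentation, load balancing, and reassembly described in the two paragraphs above can be illustrated with a minimal sketch. This is not the actual chip logic; the function names, cell-header format, and round-robin distribution are illustrative assumptions based on the 1600-bit, four-cell example:

```python
# Illustrative sketch of cell-based load balancing: a packet is split
# into fixed-size cells, each cell is tagged with an ordering header,
# the cells are spread across the SFCs, and the destination DLC
# reorders and reassembles them.

def split_into_cells(packet: bytes, cell_size: int):
    """Partition a packet into cells, each carrying an ordering header."""
    cells = []
    for seq, offset in enumerate(range(0, len(packet), cell_size)):
        payload = packet[offset:offset + cell_size]
        cells.append({"seq": seq, "payload": payload})
    return cells

def distribute(cells, num_sfcs: int):
    """Send cell i to SFC (i mod num_sfcs), balancing load across SFCs."""
    per_sfc = {sfc: [] for sfc in range(num_sfcs)}
    for i, cell in enumerate(cells):
        per_sfc[i % num_sfcs].append(cell)
    return per_sfc

def reassemble(received_cells):
    """Reorder cells by sequence number and strip the added headers."""
    ordered = sorted(received_cells, key=lambda c: c["seq"])
    return b"".join(c["payload"] for c in ordered)

packet = bytes(200)                              # a 1600-bit (200-byte) packet
cells = split_into_cells(packet, cell_size=50)   # four 400-bit cells
assert len(cells) == 4
per_sfc = distribute(cells, num_sfcs=4)          # one cell to each of 4 SFCs
arrived = [c for group in per_sfc.values() for c in group]
assert reassemble(arrived) == packet             # destination DLC rebuilds it
```

The sequence number added to each cell header is what lets the destination DLC reassemble the packet correctly even though the cells traverse different SFCs and may arrive out of order.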
The full-mesh configuration of
The fabric element chip 16 can collect information about the connectivity and statistical activity on each communication link between the fabric element chip 16 and the fabric ports 24 of the DLCs 14. Such information includes, but is not limited to, the status and bandwidth of each lane carried by the communication link in addition to various statistics related to cell transmission and receipt and detected errors. This information is considered precise and reliable, and can be used to build the topology of the distributed fabric system. The fabric element chip 16 stores the collected information in one or more tables.
The SFC chassis 12 further includes a processor 25 in communication with memory 27. Stored in the memory 27 are local software agent 28, an SDK (software development kit) 30 associated with the fabric element chip 16, and an API layer 31 by which to communicate with the SDK 30. Through the SDK 30 and SDK APIs 31, the local software agent 28 can access each table in which the fabric element chip 16 has stored the collected connectivity and/or statistical information. The execution of the local software agent 28 can occur on demand.
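The on-demand access path just described can be sketched as follows. The SDK class and its method names are hypothetical stand-ins for the chip vendor's actual SDK and API layer:

```python
# Hedged sketch of the access path above: the local software agent 28
# reads, on demand, the connectivity table maintained by the fabric
# element chip, going through an SDK/API layer (SDK 30, API layer 31).

class FabricElementSDK:
    """Stand-in for the chip vendor's SDK; not the real interface."""
    def __init__(self, connectivity_table):
        self._table = connectivity_table   # rows kept by the chip

    def read_connectivity_table(self):
        # Return a snapshot so the agent never mutates chip state.
        return list(self._table)

class LocalAgent:
    """Local software agent: queries the SDK when invoked on demand."""
    def __init__(self, sdk):
        self.sdk = sdk

    def collect(self):
        return self.sdk.read_connectivity_table()

# Example: two links detected by the fabric element chip.
table = [
    {"sfc_port": 1, "dlc": "DLC-1", "dlc_port": 3, "status": "up"},
    {"sfc_port": 2, "dlc": "DLC-2", "dlc_port": 3, "status": "up"},
]
agent = LocalAgent(FabricElementSDK(table))
assert len(agent.collect()) == 2
```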
The fabric interface 32 of each network processor 22 includes a SerDes (not shown) that preferably provides twenty-four SerDes channels 40. The SerDes includes a pair of functional blocks used to convert data between serial and parallel interfaces in each direction. In one embodiment, each SerDes channel 40 operates at a 10.3 Gbps bandwidth; the aggregate bandwidth of the twenty-four channels is approximately 240 Gbps (or approximately 480 Gbps when both fabric interfaces 32 are taken together). In another embodiment, each SerDes channel 40 operates at approximately 25 Gbps. The twenty-four SerDes channels 40 are grouped into four sets of six channels each.
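The aggregate bandwidth figures quoted above follow from simple per-channel arithmetic; a short sanity check of those numbers:

```python
# Bandwidth arithmetic for the figures above (a sanity check, not
# production code): 24 SerDes channels at 10.3 Gbps per fabric
# interface, two fabric interfaces per DLC.
channels = 24
per_channel_gbps = 10.3
per_interface = channels * per_channel_gbps   # ~247 Gbps ("approximately 240")
both_interfaces = 2 * per_interface           # ~494 Gbps ("approximately 480")
assert round(per_interface, 1) == 247.2
assert round(both_interfaces, 1) == 494.4
```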
The DLC 14 further includes PHYs 42-1, 42-2, 42-3, 42-4 (generally 42) in communication with the four (e.g., standard IB CXP) fabric ports 24-1, 24-2, 24-3, 24-4, respectively, of the DLC 14. Each of the PHYs 42 is also in communication with a group of six SerDes channels 40 from each of the two network processors 22-1, 22-2 (thus, each of the PHYs 42 supports twelve SerDes channels 40). In one embodiment, each PHY 42 is a 3×40 G PHY.
Preferably, each fabric port 24 of the DLC 14 includes a 120 Gbps CXP interface. In one embodiment, the CXP interface has twelve transmit and twelve receive SerDes lanes (12×) in a single form factor, each lane providing a 10 Gbps bandwidth. A description of the 120 Gbps 12× CXP interface can be found in the “Supplement to InfiniBand™ Architecture Specification Volume 2 Release 1.2.1”, published by the InfiniBand™ Trade Association. This embodiment of 12-lane CXP is referred to as the standard InfiniBand (IB) CXP. In another embodiment, the CXP interface has 10 lanes (10×) for supporting 10-lane applications, such as 100 Gigabit Ethernet. This embodiment of 10-lane CXP is referred to as the Ethernet CXP.
Like the fabric element chips 16 of the SFCs, the network processor chips 22 can collect information about statistics related to activity at the fabric ports of the DLCs. Such information includes, but is not limited to, statistics about the health, usage, errors, and bandwidth of individual lanes of each DLC fabric port 24. The network processor chips 22 can store the collected information in one or more tables. When executed, the local software agent 50 accesses each table through the API layer 54 and SDK layer 52. Such execution can occur on demand.
In general, the central software agent 56 gathers the information collected by each of the SFCs 12 in the distributed fabric system and creates the topology of the distributed fabric system. In
The fabric element chip 16 can also collect (step 76) per-lane statistics about the health, usage, errors, and bandwidth of individual lanes of each SFC fabric port 18. Individual lane statistics collected during a collection period include, but are not limited to, total cells received, total cells transmitted, total unicast cells, total multicast cells, total broadcast cells, total number of control cells of various types, and per-priority-queue statistics. Error statistics for individual lanes during a measurement period include, but are not limited to, cell errors received, cell errors transmitted, PLL (phase-locked loop) errors, cell header errors on received cells, various types of local buffer overflows, 1-bit parity errors, and multiple-bit parity errors. The fabric element chip 16 stores the collected statistics in the memory (e.g., in table form, with or separate from the topology information). The fabric element chip 16 can also perform per-lane diagnostics, such as tuning and testing the analog-signal attributes (e.g., amplitude and signal pre-emphasis) of each lane.
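One possible layout for the per-lane counters listed above is sketched below. The field names are assumptions chosen to mirror the statistics enumerated in the paragraph, not the chip's actual register map:

```python
# Illustrative per-lane statistics record; field names are assumptions
# mirroring the counters enumerated above, not the chip's register map.
from dataclasses import dataclass, field

@dataclass
class LaneStats:
    cells_received: int = 0
    cells_transmitted: int = 0
    unicast_cells: int = 0
    multicast_cells: int = 0
    broadcast_cells: int = 0
    control_cells: dict = field(default_factory=dict)  # per control-cell type
    cell_errors_received: int = 0
    cell_errors_transmitted: int = 0
    pll_errors: int = 0
    header_errors: int = 0
    buffer_overflows: int = 0
    parity_errors_1bit: int = 0
    parity_errors_multibit: int = 0

# One record per lane of each SFC fabric port, stored in table form
# (here: 2 ports of 12 lanes each, for illustration).
stats_table = {(port, lane): LaneStats()
               for port in range(2) for lane in range(12)}
stats_table[(0, 0)].cells_received += 100
assert stats_table[(0, 0)].cells_received == 100
assert len(stats_table) == 24
```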
Concurrent with the operation of the SFC fabric element chip 16, the DLC fabric interface 32 of the network processor chip 22 also collects (step 78) per-lane statistics of cells received or transmitted, including error statistics, over each communication link 26.
Through the API layer 31 of the SDK 30, the local software agent 28 running on the SFC chassis 12 can access (step 80) the connectivity tables produced by the fabric element chip 16 and the individual lane statistics collected by the fabric element chip 16. Similarly, through the SDK 52 and API layer 54, the local software agent 50 running on the DLC 14 accesses (step 82) the individual lane statistics collected by the fabric interface 32 of the network processor chip 22. The collection of the information by the local software agents 28, 50 can occur at predefined or dynamically set intervals.
The local software agents 28, 50 running on the SFC 12 and DLC 14, respectively, forward (step 84) the connectivity and statistics information to the central software agent 56, which runs on the master DLC (or, alternatively, on a server (e.g., server 6) connected to the data center). This information is used to build the topology of the distributed fabric system and to provide, on demand, detailed statistics for every lane on all of the ports.
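The reporting flow just described can be sketched as local agents pushing their latest records to a central aggregator. The class, method names, and record shapes are illustrative assumptions, not the agents' actual interfaces:

```python
# Minimal sketch of the reporting flow above: local agents on each SFC
# and DLC push collected records to a central software agent, which
# keeps the latest report per chassis and merges them for topology
# building.

class CentralAgent:
    def __init__(self):
        self.reports = {}          # chassis id -> latest report

    def receive(self, chassis_id, report):
        # A newer report from the same chassis replaces the old one.
        self.reports[chassis_id] = report

    def all_links(self):
        # Flatten the per-chassis connectivity records into one list.
        links = []
        for report in self.reports.values():
            links.extend(report.get("links", []))
        return links

central = CentralAgent()
central.receive("SFC-1", {"links": [("SFC-1", 1, "DLC-1", 3)]})
central.receive("SFC-2", {"links": [("SFC-2", 1, "DLC-1", 4)]})
assert len(central.all_links()) == 2
```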
In response to the connectivity information received from the SFCs, the central software agent 56 generates (step 86) a connectivity graph representing the topology of the distributed fabric system. This graph precisely depicts all the DLCs and SFCs in the distributed fabric system with their interconnectivity. In addition to the topological information and various cell statistics for each lane, the central software agent 56 has the bandwidth of the links, oversubscription factors, traffic distribution, and other details. The connectivity graph can also show the bandwidth (and/or such other information) of all the interconnected links 26. Further, because the fabric element chips 16 update their connectivity matrix with high frequency, the central software agent 56 can frequently update the global connectivity topology of the distributed fabric system to show the link status (for example) along with the changes in the topology.
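The graph-generation step above can be sketched as mapping each (SFC, DLC) pair to its link attributes, with a fresh connectivity report refreshing the graph. The record format and attribute names are assumptions for illustration:

```python
# Hedged sketch of connectivity-graph generation: edges are (SFC, DLC)
# pairs annotated with bandwidth and status, and a fresh report
# rebuilds the graph so it tracks topology changes.

def build_graph(link_records):
    """Map (sfc, dlc) -> link attributes from per-SFC connectivity info."""
    graph = {}
    for rec in link_records:
        key = (rec["sfc"], rec["dlc"])
        graph[key] = {"bandwidth_gbps": rec["bandwidth_gbps"],
                      "status": rec["status"]}
    return graph

records = [
    {"sfc": "SFC-1", "dlc": "DLC-1", "bandwidth_gbps": 120, "status": "up"},
    {"sfc": "SFC-1", "dlc": "DLC-2", "bandwidth_gbps": 120, "status": "up"},
]
graph = build_graph(records)

# A later report marks one link down; rebuilding refreshes the view,
# mirroring the high-frequency topology updates described above.
records[1]["status"] = "down"
graph = build_graph(records)
assert graph[("SFC-1", "DLC-2")]["status"] == "down"
assert graph[("SFC-1", "DLC-1")]["bandwidth_gbps"] == 120
```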
A network administrator from the management station 4 can connect (step 88) to the device running the central software agent 56 and request the collected and updated information. In response to the request, a GUI-based application running on the management station 4 displays (step 90) the connectivity graph to present a latest view, in graphical form, of the topology of the entire distributed fabric system. The latest view can include the bandwidth and link status of each communication link 26 between each SFC and each DLC.
The graphical view of the entire network topology of the complex distributed network system advantageously facilitates management of the distributed fabric system, fault diagnoses, and debugging. A network administrator can interact (step 92) with the graphical view of the distributed fabric system to control the topology of the system by controlling the status of links between SFCs and DLCs. The on-demand display of the statistics on a per lane, per SFC fabric port, per DLC basis with respect to each SFC and individual fabric element chips simplifies the troubleshooting of problems that arise in the distributed fabric system by pinpointing the affected links.
As described previously, this mapped information can be accessed through the SDK 30 and API layer 31 of the fabric element chip 16 and used to construct and display a graph representing the topology of the distributed fabric system, along with the status of each link and its respective bandwidth.
As will be appreciated by one skilled in the art, aspects of the present invention may be embodied as a system, method, or computer program product. Thus, aspects of the present invention may be embodied entirely in hardware, entirely in software (including, but not limited to, firmware, program code, resident software, and microcode), or in a combination of hardware and software. All such embodiments may generally be referred to herein as a circuit, a module, or a system. In addition, aspects of the present invention may take the form of a computer program product embodied in one or more computer readable media having computer readable program code embodied thereon.
Any combination of one or more computer readable medium(s) may be utilized. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wired, optical fiber cable, radio frequency (RF), etc. or any suitable combination thereof.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object oriented programming language such as JAVA, Smalltalk, C++, and Visual C++ or the like and conventional procedural programming languages, such as the C and Pascal programming languages or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the latter scenario, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
Aspects of the present invention are described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the Figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems that perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Aspects of the described invention may be implemented in one or more integrated circuit (IC) chips manufactured with semiconductor-fabrication processes. The maker of the IC chips can distribute them in raw wafer form (on a single wafer with multiple unpackaged chips), as bare die, or in packaged form. When in packaged form, the IC chip is mounted in a single chip package, for example, a plastic carrier with leads affixed to a motherboard or other higher level carrier, or in a multichip package, for example, a ceramic carrier having surface and/or buried interconnections. The IC chip is then integrated with other chips, discrete circuit elements, and/or other signal processing devices as part of either an intermediate product, such as a motherboard, or of an end product. The end product can be any product that includes IC chips, ranging from electronic gaming systems and other low-end applications to advanced computer products having a display, an input device, and a central processor.
Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the invention. The embodiments were chosen and described in order to best explain the principles of the invention and the practical application, and to enable others of ordinary skill in the art to understand the invention for various embodiments with various modifications as are suited to the particular use contemplated.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It is to be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
The corresponding structures, materials, acts, and equivalents of all means or step plus function elements in the claims below are intended to include any structure, material, or act for performing the function in combination with other claimed elements as specifically claimed. The description of the present invention has been presented for purposes of illustration and description, but is not intended to be exhaustive or limited to the invention in the form disclosed.
While the invention has been shown and described with reference to specific preferred embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the following claims.
This application is a continuation application claiming priority to and the benefit of the filing date of U.S. patent application Ser. No. 13/414,677, filed Mar. 7, 2012, titled “Management of a Distributed Fabric System,” the contents of which are incorporated by reference herein in their entirety.
20100061242 | Sindhu et al. | Mar 2010 | A1 |
20100061367 | Sindhu et al. | Mar 2010 | A1 |
20100061389 | Sindhu et al. | Mar 2010 | A1 |
20100061391 | Sindhu et al. | Mar 2010 | A1 |
20100061394 | Sindhu et al. | Mar 2010 | A1 |
20100162036 | Linden et al. | Jun 2010 | A1 |
20100169446 | Linden et al. | Jul 2010 | A1 |
20100182934 | Dobbins et al. | Jul 2010 | A1 |
20100214949 | Smith et al. | Aug 2010 | A1 |
20100303086 | Bialkowski | Dec 2010 | A1 |
20100315972 | Plotnik et al. | Dec 2010 | A1 |
20110047467 | Porter | Feb 2011 | A1 |
20110093574 | Koehler et al. | Apr 2011 | A1 |
20110103259 | Aybay et al. | May 2011 | A1 |
20110116376 | Pacella et al. | May 2011 | A1 |
20110179315 | Yang | Jul 2011 | A1 |
20110228669 | Lei et al. | Sep 2011 | A1 |
20110238816 | Vohra et al. | Sep 2011 | A1 |
20120002670 | Subramanian et al. | Jan 2012 | A1 |
20120020373 | Subramanian et al. | Jan 2012 | A1 |
20120063464 | Mehta | Mar 2012 | A1 |
20120155453 | Vohra et al. | Jun 2012 | A1 |
20120170548 | Rajagopalan et al. | Jul 2012 | A1 |
20120287926 | Anantharam et al. | Nov 2012 | A1 |
20120294314 | Campbell et al. | Nov 2012 | A1 |
20120297103 | Kamble et al. | Nov 2012 | A1 |
20120324442 | Barde | Dec 2012 | A1 |
20130060929 | Koponen et al. | Mar 2013 | A1 |
20130064102 | Chang et al. | Mar 2013 | A1 |
20130088971 | Anantharam et al. | Apr 2013 | A1 |
20130089089 | Kamath et al. | Apr 2013 | A1 |
20130103817 | Koponen et al. | Apr 2013 | A1 |
20130107709 | Campbell et al. | May 2013 | A1 |
20130107713 | Campbell et al. | May 2013 | A1 |
20130142196 | Cors et al. | Jun 2013 | A1 |
20130201873 | Anantharam et al. | Aug 2013 | A1 |
20130201875 | Anantharam et al. | Aug 2013 | A1 |
20130235735 | Anantharam et al. | Sep 2013 | A1 |
20130235762 | Anantharam et al. | Sep 2013 | A1 |
20130235763 | Anantharam et al. | Sep 2013 | A1 |
20130242999 | Kamble et al. | Sep 2013 | A1 |
20130247168 | Kamble et al. | Sep 2013 | A1 |
20130315233 | Kamble et al. | Nov 2013 | A1 |
20130315234 | Kamble et al. | Nov 2013 | A1 |
20140064105 | Anantharam et al. | Mar 2014 | A1 |
Foreign Patent Documents

Number | Date | Country |
---|---|---|
101098260 | Jan 2008 | CN |
102082690 | Jun 2011 | CN |
200971619 | Apr 2009 | JP |
2009542053 | Nov 2009 | JP |
Other Publications

Entry |
---|
Kandalla et al., “Designing Topology-Aware Collective Communication Algorithms for Large Scale Infiniband Clusters: Case Studies with Scatter and Gather”, IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum, 2010; 8 pages. |
Coti et al., “MPI Applications on Grids: a Topology Aware Approach”, Euro-Par, Parallel Processing, Lecture Notes in Computer Science, 2009, University of Paris, Orsay, France; 12 pages. |
Allen, D., “From the Data Center to the Network: Virtualization bids to remap the LAN”, Network Magazine, vol. 19, No. 2, Feb. 2004; 5 pages. |
Lawrence et al., “An MPI Tool for Automatically Discovering the Switch Level Topologies of Ethernet Clusters”, IEEE International Symposium on Parallel and Distributed Processing, Apr. 2008, Miami, FL; 8 pages. |
Malavalli, Kumar, “High Speed Fibre Channel Switching Fabric Services”, SPIE Conference on High-Speed Fiber Networks and Channels, Boston, MA (USA), Sep. 1991; 11 pages. |
Aureglia, JJ, et al., “Power Backup for Stackable System”, IP.com, Dec. 1, 1995, 5 pages. |
Ayandeh, Siamack, “A Framework for Benchmarking Performance of Switch Fabrics”, 10th Annual International Conference on Telecommunication, IEEE Conference Proceedings, vol. 2, 2003; pp. 1650-1655. |
Brey et al., “BladeCenter Chassis Management”, IBM J. Res. & Dev. vol. 49, No. 6, Nov. 2005; pp. 941-961. |
Cisco, “Chapter 4: Switch Fabric”, Cisco CRS Carrier Routing System 16-Slot Line Card Chassis System Description, Cisco.com, accessed Jan. 2012; 6 pages. |
“Control Plane Scaling and Router Virtualization”, Juniper Networks, 2010; 12 pages. |
IBM, “A Non-invasive Computation-assist Device for Switching Fabrics in a High-Performance Computing System”, IP.com, Aug. 30, 2007, 5 pages. |
Ni, Lionel M., et al., “Switches and Switch Interconnects”, Michigan State University, Jun. 1997; 8 pages. |
Rogerio, Drummond, “Impact of Communication Networks on Fault-Tolerant Distributed Computing”, IP.com, Apr. 20, 1986, 53 pages. |
Tate, J. et al., “IBM b-type Data Center Networking: Design and Best Practices Introduction”, IBM Redbooks; Dec. 2010; 614 pages. |
Teow, K., “Definitions of Managed Objects for the Fabric Element in Fibre Channel Standard (RFC2837)”, IP.com, May 1, 2000; 41 pages. |
Non-Final Office Action in related U.S. Appl. No. 13/364,896, mailed on Nov. 21, 2013; 33 pages. |
Non-Final Office Action in related U.S. Appl. No. 13/453,644, mailed on Nov. 25, 2013; 16 pages. |
Takafumi Hamano et al., “Packet forwarding control functions of an Open Architecture Router”, IEICE Technical Report, Jun. 16, 2005, vol. 105, No. 127, pp. 45-48. (Translation of abstract only.) |
Shunsuke Fujita et al., “Effect of application order of topology inference rule for Layer2 Network Topology visualizing System”, IPSJ SIG Technical Reports, Feb. 26, 2009, vol. 2009, No. 21, pp. 185-190. (Translation of abstract only.) |
International Search Report and Written Opinion in related international application No. PCT/IB2013/051339, mailed on May 21, 2013; 8 pages. |
Non-Final Office Action in related U.S. Appl. No. 13/414,677, mailed on May 20, 2014; 14 pages. |
Final Office Action in related U.S. Appl. No. 13/453,644, mailed on May 21, 2014; 25 pages. |
Final Office Action in related U.S. Appl. No. 13/364,896, mailed on Jun. 4, 2014; 34 pages. |
Non-Final Office Action in related U.S. Appl. No. 13/414,684, mailed on Jun. 4, 2014; 21 pages. |
Non-Final Office Action in related U.S. Appl. No. 14/072,941, mailed on Jul. 31, 2014; 19 pages. |
Final Office Action in related U.S. Appl. No. 13/453,644, mailed on Sep. 20, 2014; 26 pages. |
Examination Report in related United Kingdom Patent Application No. 1412787.2, mailed on Aug. 26, 2014; 3 pages. |
Notice of Allowance in related U.S. Appl. No. 13/414,677, mailed on Oct. 24, 2014; 9 pages. |
Notice of Allowance in related U.S. Appl. No. 13/414,684, mailed on Oct. 24, 2014; 9 pages. |
Notice of Allowance in related U.S. Appl. No. 13/646,378, mailed on Nov. 28, 2014; 7 pages. |
Notice of Allowance in related U.S. Appl. No. 13/414,684, mailed on Dec. 3, 2014; 9 pages. |
Non-Final Office Action in related U.S. Appl. No. 13/646,378, mailed on Apr. 15, 2014; 23 pages. |
Notice of Allowance in related U.S. Appl. No. 13/646,378, mailed on Oct. 9, 2014; 8 pages. |
Non-Final Office Action in related U.S. Appl. No. 13/364,896, mailed on Nov. 19, 2014; 45 pages. |
Notice of Allowance & Fees Due in related U.S. Appl. No. 13/414,677, mailed on Dec. 3, 2014; 10 pages. |
International Search Report & Written Opinion in related international patent application No. PCT/IB2013/050428, mailed on Jun. 20, 2013; 9 pages. |
Prior Publication Data

Number | Date | Country | |
---|---|---|---|
20130235763 A1 | Sep 2013 | US |
Related U.S. Application Data

Number | Date | Country | |
---|---|---|---|
Parent | 13414677 | Mar 2012 | US |
Child | 13454987 | | US |