Some embodiments described herein relate generally to distributed switch fabric systems, and, in particular, to automatically provisioning resources and transmitting forwarding-state information in a distributed switch fabric system.
Some known networking systems use a targeted routing protocol to distribute forwarding-state information between different nodes within the networking system. Such known networking systems, however, do not automatically provision the nodes of the network system. Similarly stated, such known networking systems do not automatically provide identifiers and/or addresses of each node to the other nodes within the networking system. Accordingly, to transmit forwarding-state information between the nodes within the networking system, a system administrator manually configures each node within the networking system with the addresses and/or identifiers of the remaining nodes within the networking system.
In networking systems having a large number of nodes and/or in networking systems in which the topology frequently changes, manually configuring each node within the system can be time and/or labor intensive. Additionally, errors can be accidentally input into a configuration file by the system administrator during manual configuration.
Accordingly, a need exists for apparatus and methods to automatically provision a switch fabric system such that the nodes within the switch fabric system can exchange forwarding-state information using a targeted protocol.
In some embodiments, a network management module is operatively coupled to a set of edge devices that are coupled to a set of peripheral processing devices. The network management module can receive a signal associated with a broadcast protocol from an edge device from the set of edge devices in response to that edge device being operatively coupled to a switch fabric. The network management module can provision that edge device in response to receiving the signal. The network management module can define multiple network control entities at the set of edge devices such that each network control entity from the multiple network control entities can provide forwarding-state information associated with at least one peripheral processing device from the set of peripheral processing devices to at least one remaining network control entity from the multiple network control entities using a selective protocol.
By automatically provisioning each edge device using a broadcast protocol, an identifier and/or address associated with each network control entity can be automatically provided to the other network control entities within a switch fabric system. Accordingly, each network control entity within the switch fabric system can provide forwarding-state information to other network control entities within the switch fabric system without a system operator and/or administrator manually configuring the network control entities as peers. For example, Intermediate System-to-Intermediate System (IS-IS) can be used with Type Length Value (TLV) fields to configure the network control entities as Border Gateway Protocol (BGP) peers. BGP-format messages can then be used to transmit the forwarding-state information between the network control entities.
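For illustration only, the following sketch shows how peer addresses carried in TLV fields of a broadcast-protocol message could be collected and handed to a targeted-protocol speaker. The TLV type number, the address encoding and the function names are assumptions made for the sketch; they are not values defined by the IS-IS or BGP specifications.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical TLV type number chosen for this sketch only; it is not an
# assignment defined by the IS-IS specification.
PEER_ADDRESS_TLV = 137

@dataclass
class Tlv:
    type: int      # identifies what the value represents
    value: bytes   # raw payload; the length field is implied by len(value)

def peers_from_broadcast(tlvs: List[Tlv]) -> List[str]:
    """Collect the network-control-entity addresses advertised in a
    broadcast-protocol message so they can be configured as BGP peers."""
    return [tlv.value.decode("ascii") for tlv in tlvs if tlv.type == PEER_ADDRESS_TLV]

# Addresses learned this way can be handed to the targeted-protocol speaker
# directly, replacing the manual peer configuration described above.
tlvs = [Tlv(PEER_ADDRESS_TLV, b"10.0.1.2"), Tlv(PEER_ADDRESS_TLV, b"10.0.1.3")]
print(peers_from_broadcast(tlvs))  # ['10.0.1.2', '10.0.1.3']
```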
In some embodiments, a non-transitory processor-readable medium stores code representing instructions to cause a processor to send a first signal indicating that an edge device has been operatively coupled to a switch fabric system defining multiple virtual switch fabric systems. The first signal is based on a broadcast protocol. The code represents instructions to cause the processor to receive a second signal from a network management module. The second signal causes the edge device to initiate a first network control entity at the edge device. The second signal assigns to the first network control entity a device identifier and a virtual switch fabric system identifier associated with a virtual switch fabric system from the multiple virtual switch fabric systems. The first network control entity manages at least a portion of the edge device. The code represents instructions to cause the processor to send, using the first network control entity, forwarding-state information associated with a peripheral processing device operatively coupled to the edge device to a second network control entity associated with the virtual switch fabric system, using a selective protocol.
In some embodiments, a switch fabric system includes a set of edge devices associated with a network and operatively coupled to a switch fabric and multiple peripheral processing devices. A first edge device from the set of edge devices can send a broadcast signal to a set of devices associated with the network when the first edge device is initially coupled to the network. A network management module can automatically provision the first edge device from the set of edge devices in response to receiving the broadcast signal. The network management module defines a first network control entity at the first edge device from the set of edge devices and a second network control entity at a second edge device from the set of edge devices. A first set of peripheral processing devices from the multiple peripheral processing devices is associated with the first network control entity, and a second set of peripheral processing devices from the multiple peripheral processing devices is associated with the second network control entity. The first network control entity sends forwarding-state information associated with the first set of peripheral processing devices to the second network control entity using a selective protocol.
Embodiments shown and described herein are often discussed in reference to multiple layers (e.g., data link layer, network layer, physical layer, application layer, etc.). Such layers can be defined by the open systems interconnection (OSI) model. Accordingly, the physical layer can be a lower level layer than the data link layer. Additionally, the data link layer can be a lower level layer than the network layer and the application layer. Further, different protocols can be associated with and/or implemented at different layers within the OSI model. For example, an Ethernet protocol, a Fibre Channel protocol and/or a cell-based protocol (e.g., used within a data plane portion of a communications network) can be associated with and/or implemented at a data link layer, while a Border Gateway Protocol (BGP) can be associated with and/or implemented at a higher layer, such as, for example, an application layer. While BGP can be implemented at the application layer, it can be used, for example, to send forwarding-state information used to populate a routing table associated with a network layer.
As used herein, the term “physical hop” can include a physical link between two modules and/or devices. For example, a communication path operatively coupling a first module with a second module can be said to be a physical hop. Similarly stated, a physical hop can physically link the first module with the second module.
As used herein, the term “single physical hop” can include a direct physical connection between two modules and/or devices in a system. Similarly stated, a single physical hop can include a link via which two modules are coupled without intermediate modules. Accordingly, for example, if a first module is coupled to a second module via a single physical hop, the first module can send data packets directly to the second module without sending the data packets through intervening modules.
As used herein, the term “single logical hop” means a physical hop and/or group of physical hops that are a single hop within a network topology associated with a first protocol (e.g., a first data link layer protocol). Similarly stated, according to the network topology associated with the first protocol, no intervening nodes exist between a first module and/or device operatively coupled to a second module and/or device via the physical hop and/or the group of physical hops. A first module and/or device connected to a second module and/or device via a single logical hop can send a data packet to the second module and/or device using a destination address associated with the first protocol and the second module and/or device, regardless of the number of physical hops between the first device and the second device. In some embodiments, for example, a second protocol (e.g., a second data link layer protocol) can use the destination address of the first protocol (e.g., the first data link layer protocol) to route a data packet and/or cell from the first module and/or device to the second module and/or device over the single logical hop. Similarly stated, when a first module and/or device sends data to a second module and/or device via a single logical hop of a first protocol, the first module and/or device treats the single logical hop as if it is sending the data directly to the second module and/or device. In some embodiments, for example, the first protocol can be a packet-based data link layer protocol (i.e., that transmits variable length data packets and/or frames) and the second protocol can be a cell-based data link layer protocol (i.e., that transmits fixed length data cells and/or frames).
In some embodiments, a switch fabric can function as part of a single logical hop (e.g., a single large-scale consolidated layer-2 (L2)/layer-3 (L3) switch). Portions of the switch fabric can be physically distributed across, for example, many chassis and/or modules interconnected by multiple physical hops. In some embodiments, for example, a processing stage of the switch fabric can be included in a first chassis and another processing stage of the switch fabric can be included in a second chassis. Both of the processing stages can logically function as part of a single consolidated switch (e.g., within the same logical hop according to a first protocol) but include a separate single physical hop between respective pairs of processing stages. Similarly stated, each stage within a switch fabric can be connected to adjacent stage(s) by physical links while operating collectively as a single logical hop associated with a protocol used to route data outside the switch fabric. Additionally, packet classification and forwarding associated with a protocol (e.g., Ethernet) used to route data outside a single logical hop need not occur at each stage within the single logical hop. In some embodiments, for example, packet classification and forwarding associated with a first protocol (e.g., Ethernet) can occur prior to a module and/or device sending the data packet to another module and/or device via the single logical hop.
As used in this specification, the singular forms “a,” “an” and “the” include plural referents unless the context clearly dictates otherwise. Thus, for example, the term “a module” is intended to mean a single module or a combination of modules.
The peripheral processing devices 114, 124, 134 can be operatively coupled to the edge devices 182, 184, 186 of the switch fabric system 100 using any suitable connection such as, for example, an optical connection (e.g., an optical cable and optical connectors), an electrical connection (e.g., an electrical cable and electrical connectors) and/or the like. As such, the peripheral processing devices 114, 124, 134 can send data (e.g., data packets, data cells, etc.) to and receive data from the switch fabric system 100 via the edge devices 182, 184, 186. In some embodiments, the connection between the peripheral processing devices 114, 124, 134 and the edge devices 182, 184, 186 is a direct link. Such a link can be said to be a single physical hop link. In other embodiments, the peripheral processing devices can be operatively coupled to the edge devices via intermediate modules. Such a connection can be said to be a multiple physical hop link.
Each edge device 182, 184, 186 can be any device that operatively couples peripheral processing devices 114, 124, 134 to the switch fabric 102. In some embodiments, for example, the edge devices 182, 184, 186 can be access switches, input/output modules, top-of-rack devices and/or the like. Structurally, the edge devices 182, 184, 186 can function as both source edge devices and destination edge devices. Accordingly, the edge devices 182, 184, 186 can send data (e.g., a data stream of data packets and/or data cells) to and receive data from the switch fabric 102, and to and from the connected peripheral processing devices 114, 124, 134.
In some embodiments, the edge devices 182, 184, 186 can be a combination of hardware modules and software modules (executing in hardware). In some embodiments, for example, each edge device 182, 184, 186 can include a field-programmable gate array (FPGA), an application specific integrated circuit (ASIC), a digital signal processor (DSP) and/or the like.
Each of the edge devices 182, 184, 186 can communicate with the other edge devices 182, 184, 186 via the switch fabric 102. Specifically, the switch fabric 102 provides any-to-any connectivity between the edge devices 182, 184, 186 at relatively low latency. For example, switch fabric 102 can transmit (e.g., convey) data between edge devices 182, 184, 186. In some embodiments, the switch fabric 102 can have at least hundreds or thousands of ports (e.g., egress ports and/or ingress ports) through which edge devices such as edge devices 182, 184, 186 can transmit and/or receive data.
Ports 211, 212, 221 and 222 can be similar to the ports of the edge devices 182, 184, 186 operatively coupled to peripheral processing devices 114, 124, 134. For example, ports 211, 212, 221 and 222 can implement a physical layer using twisted-pair electrical signaling via electrical cables or fiber-optic signaling via fiber-optic cables. In some embodiments, some of ports 211, 212, 221 and 222 implement one physical layer such as twisted-pair electrical signaling and others of ports 211, 212, 221 and 222 implement a different physical layer such as fiber-optic signaling. Furthermore, ports 211, 212, 221 and 222 can be configured to allow edge device 200 to communicate with peripheral processing devices, such as, for example, computer servers (servers), via a common protocol such as Ethernet or Fibre Channel. In some embodiments, some of ports 211, 212, 221 and 222 implement one protocol such as Ethernet and others of ports 211, 212, 221 and 222 implement a different protocol such as Fibre Channel. Thus, edge device 200 can be in communication with multiple peripheral processing devices using homogeneous or heterogeneous physical layers and/or protocols via ports 211, 212, 221 and 222.
Port 231 can be configured to be in communication with other edge devices via a communications network such as switch fabric 102. Port 231 can be part of one or more network interface devices (e.g., a 40 Gigabit (Gb) Ethernet interface, a 100 Gb Ethernet interface, etc.) through which the edge device 200 can send signals to and/or receive signals from a communications network. The signals can be sent to and/or received from the communications network via an electrical link, an optical link and/or a wireless link operatively coupled to the edge device 200. In some embodiments, the edge device 200 can be configured to send signals to and/or receive signals from the communications network based on one or more protocols (e.g., an Ethernet protocol, a multi-protocol label switching (MPLS) protocol, a Fibre Channel protocol, a Fibre Channel over Ethernet protocol, an Infiniband-related protocol, a cell-based protocol).
In some embodiments, port 231 can implement a different physical layer and/or protocol than those implemented at ports 211, 212, 221 and 222. For example, ports 211, 212, 221 and 222 can be configured to communicate with peripheral processing devices using a data link layer protocol based on data packets, and port 231 can be configured to communicate via a switch fabric (e.g., switch fabric 102) using a data link layer protocol based on data cells. Said differently, edge device 200 can be an edge device of a network switch such as a distributed network switch.
In some embodiments, the edge device 200 can be configured to prepare a data packet (e.g., an Ethernet frame and/or packet) to enter a data plane portion of a communications network (e.g., switch fabric 102). For example, the edge device 200 can be configured to forward, classify, and/or modify the packet encapsulation (e.g., modify, add and/or remove a header portion, footer portion and/or any other identifier included within the data packet) of a data packet prior to sending the data packet to the communications network. Additionally, the edge device 200 can be configured to partition and/or divide the data packet into data cells (e.g., having fixed length payloads) prior to sending the data cells to the switch fabric. Additional details related to packet classification are described in U.S. patent application Ser. No. 12/242,168 entitled “Methods and Apparatus Related to Packet Classification Associated with a Multi-Stage Switch,” filed Sep. 30, 2008, and U.S. patent application Ser. No. 12/242,172, entitled “Methods and Apparatus for Packet Classification Based on Policy Vectors,” filed Sep. 30, 2008, both of which are incorporated herein by reference in their entireties.
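As a rough illustration of the partitioning step (the fixed payload length, padding scheme and function names below are assumptions made for this sketch, not a defined cell format), a data packet can be divided into equal-sized cells and later reassembled at the destination edge device:

```python
CELL_PAYLOAD_LEN = 64  # assumed fixed payload size for this sketch only

def packet_to_cells(packet: bytes) -> list[bytes]:
    """Partition a variable-length packet into fixed-length cell payloads,
    padding the final cell so that every cell carries the same number of bytes."""
    cells = []
    for offset in range(0, len(packet), CELL_PAYLOAD_LEN):
        cells.append(packet[offset:offset + CELL_PAYLOAD_LEN].ljust(CELL_PAYLOAD_LEN, b"\x00"))
    return cells

def cells_to_packet(cells: list[bytes], original_len: int) -> bytes:
    """Reassemble the original packet at the destination edge device."""
    return b"".join(cells)[:original_len]

frame = b"an example Ethernet frame payload"
cells = packet_to_cells(frame)
assert all(len(cell) == CELL_PAYLOAD_LEN for cell in cells)
assert cells_to_packet(cells, len(frame)) == frame
```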
Returning to
Each network control entity 192, 194, 196 can send and/or distribute forwarding-state information (e.g., port identifiers, network segment identifiers, peripheral processing device identifiers, edge device identifiers, data plane module identifiers, next hop references, next hop identifiers, etc.) over the control plane for a set of ports that network control entity 192, 194, 196 manages. As discussed in further detail herein, for example, the network control entity 196 can send, via the control plane, forwarding-state information associated with the port at edge device 186 to which the peripheral processing device 134′ is coupled, to the network control entity 194. Using the received forwarding-state information, the edge device 184 can address and send a data packet received from the peripheral processing device 124′ to the edge device 186, via the switch fabric 102.
In some embodiments and as described in further detail herein, the network control entity 196 can send forwarding-state information to the network control entity 194 using a targeted higher level protocol (e.g., an application layer protocol) such as, for example, Border Gateway Protocol (BGP). In such embodiments, the network control entity 196 can send the forwarding-state information using such a higher level protocol in conjunction with any suitable lower level protocol (e.g., a data link layer protocol), such as, for example, Ethernet and/or Fibre Channel. While BGP can be implemented at the application layer, it can be used to send forwarding-state information used to populate a routing table (e.g., at the network control entity 194) associated with a network layer. Using a targeted protocol, such as BGP, the network control entity 196 can send the forwarding-state information to specific network control entities (e.g., 194) while refraining from sending the forwarding-state information to other network control entities (e.g., 192).
In some embodiments, a network control entity 192, 194, 196 can control and/or manage ports at an edge device 182, 184, 186 at which the network control entity 192, 194, 196 is located. In other embodiments, a network control entity can also control and/or manage ports and/or data plane modules at an edge device other than the edge device at which the network control entity is located. In such embodiments, the network management module 160 has flexibility to assign each port to a network control entity 192, 194, 196 based on processing capacity, as described in further detail herein. Additionally, in such embodiments, the network management module 160 is not constrained by the physical location of the network control entities 192, 194, 196 and/or the ports when assigning the ports to a network control entity 192, 194, 196. Moreover, while each edge device 182, 184, 186 is shown in
In some embodiments, the ports associated with multiple network control entities 192, 194, 196 can form a virtual switch fabric system. Such a virtual switch fabric system can be a group and/or collection of network control entities (and their associated ports) that share forwarding-state information with the other network control entities within the virtual switch fabric system, but not those network control entities outside of the same virtual switch fabric system. A rule and/or policy implemented at a network control entity 192, 194, 196 and/or the network management module 160 can prevent and/or restrict a network control entity of a first virtual switch fabric system from sending forwarding-state information to a network control entity of a second virtual switch fabric system. Accordingly, because forwarding-state information is not exchanged between the network control entities of the first virtual switch fabric system and the network control entities of the second virtual switch fabric system, the peripheral processing devices operatively coupled to ports associated with the network control entities of the first virtual switch fabric system do not send data packets to the peripheral processing devices operatively coupled to ports associated with the network control entities of the second virtual switch fabric system. For example, a first organization assigned to a first virtual switch fabric system can protect data transmitted over switch fabric 102 from being sent to and/or viewed by a second organization associated with a second virtual switch fabric system. Each network control entity within a given virtual switch fabric system can be assigned a virtual switch fabric system identifier, which, in some embodiments, is provided by network management module 160. In some embodiments, a virtual switch fabric system can also be referred to as a network segment, a sub-network or a virtual network.
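The following sketch illustrates such a rule and/or policy: forwarding-state information is offered only to network control entities that share the sender's virtual switch fabric system identifier. The entity names, identifiers and table structure are hypothetical.

```python
# Hypothetical membership table: which virtual switch fabric system each
# network control entity belongs to (names and identifiers are illustrative).
vsf_membership = {
    "nce_a": "vsf_1",
    "nce_b": "vsf_1",
    "nce_c": "vsf_2",
}

def allowed_recipients(sender: str) -> list[str]:
    """Network control entities permitted to receive the sender's forwarding-state
    information: same virtual switch fabric system as the sender, excluding itself."""
    vsf = vsf_membership[sender]
    return [nce for nce, member in vsf_membership.items()
            if member == vsf and nce != sender]

# nce_c belongs to another virtual switch fabric system, so it is skipped.
print(allowed_recipients("nce_a"))  # ['nce_b']
```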
In some embodiments, network management module 160 can be a process, application, virtual machine and/or some other software module (executing in hardware), or a hardware module, that is executed at a compute node (not shown in
The network management module 160 can be operatively coupled to the edge devices 182, 184, 186 via a control plane (not shown in
Network management module 160 can provision edge devices 182, 184, 186 when the edge devices 182, 184, 186 are initially coupled to the switch fabric system 100. More specifically, as described in further detail herein, when an edge device is initially connected to the switch fabric system 100, network management module 160 can assign a device identifier to this newly connected edge device. Such a device identifier can be, for example, a physical address (e.g., media access control (MAC), etc.), a logical address (e.g., internet protocol (IP), etc.) and/or any other suitable address. In some embodiments the device identifier is assigned using a dynamic address assigning protocol (e.g., Dynamic Host Configuration Protocol (DHCP), etc.). As discussed in further detail herein, an initiation signal and/or a provisioning signal can be formatted and sent from an edge device 182, 184, 186 to the network management module 160 or from the network management module 160 to an edge device 182, 184, 186, respectively, using a broadcast protocol such as, for example, an Intermediate System to Intermediate System (IS-IS) protocol. In such embodiments, provisioning information can be encoded as a type-length-value (TLV) element inside the initiation signal and/or provisioning signal.
In some embodiments, the network management module 160 can assign and/or associate other identifiers to the newly-connected edge device. In some embodiments, for example, the network management module 160 can assign a virtual switch fabric system identifier, associating that edge device with a particular virtual switch fabric system. In other embodiments, any other identifier and/or association can be assigned to the newly-connected edge device by the network management module 160.
In some embodiments, the network management module 160 can also monitor an available processing capacity of each network control entity 192, 194, 196 and initiate and/or terminate network control entities 192, 194, 196 when an available processing capacity of a network control entity 192, 194, 196 crosses (e.g., falls below) a first threshold and/or crosses (e.g., exceeds) a second threshold, respectively. Such initiation and termination of network control entities can be similar to that described in co-pending U.S. patent application Ser. No. 12/968,848, filed on Dec. 15, 2010, and entitled “Methods and Apparatus for Dynamic Resource Management within a Distributed Control Plane of a Switch,” which is incorporated herein by reference in its entirety. Additionally, the network management module 160 can reassign ports to different network control entities as the available processing capacities of the network control entities 192, 194, 196 fluctuate.
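A minimal sketch of the threshold logic follows. The numeric thresholds and the returned action strings are assumptions for illustration; the text above only states that a first and a second threshold exist.

```python
# Assumed thresholds, expressed as a fraction of processing capacity still available.
LOW_CAPACITY_THRESHOLD = 0.2
HIGH_CAPACITY_THRESHOLD = 0.8

def capacity_action(available_capacity: float) -> str:
    """Decide what the network management module might do for one network
    control entity based on its available processing capacity."""
    if available_capacity < LOW_CAPACITY_THRESHOLD:
        # Little headroom left: initiate another network control entity and
        # reassign some of this entity's ports to it.
        return "initiate new network control entity"
    if available_capacity > HIGH_CAPACITY_THRESHOLD:
        # Mostly idle: fold this entity's ports into another entity and terminate it.
        return "terminate network control entity"
    return "no change"

print(capacity_action(0.1), "|", capacity_action(0.9), "|", capacity_action(0.5))
```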
The switch fabric 102 can be any suitable switch fabric that operatively couples the edge devices 182, 184, 186 to the other edge devices 182, 184, 186. In some embodiments, for example, the switch fabric 102 can be a Clos network (e.g., a non-blocking Clos network, a strict sense non-blocking Clos network, a Benes network) having multiple stages of switching modules (e.g., integrated Ethernet switches). In some embodiments, for example, the switch fabric 102 shown in
In some embodiments, the switch fabric 102 can be (e.g., can function as) a single consolidated switch (e.g., a single large-scale consolidated L2/L3 switch). In other words, the switch fabric 102 can operate as a single logical entity (e.g., a single logical network element). Similarly stated, the switch fabric 102 can be part of a single logical hop between a first edge device 182, 184, 186 and a second edge device 182, 184, 186 (e.g., along with the data paths between the edge devices 182, 184, 186 and the switch fabric 102). The switch fabric 102 can connect (e.g., facilitate communication between) the peripheral processing devices 114, 124, 134. In some embodiments, the switch fabric 102 can communicate via interface devices (not shown) that transmit data at a rate of at least 10 Gb/s. In some embodiments, the switch fabric 102 can communicate via interface devices (e.g., Fibre-Channel interface devices) that transmit data at a rate of, for example, 2 Gb/s, 4 Gb/s, 8 Gb/s, 10 Gb/s, 40 Gb/s, 100 Gb/s and/or faster link speeds.
Although the switch fabric 102 can be logically centralized, the implementation of the switch fabric 102 can be highly distributed, for example, for reliability. For example, portions of the switch fabric 102 can be physically distributed across, for example, many chassis. In some embodiments, for example, a processing stage of the switch fabric 102 can be included in a first chassis and another processing stage of the switch fabric 102 can be included in a second chassis. Both of the processing stages can logically function as part of a single consolidated switch (e.g., within the same logical hop) but have a separate single physical hop between respective pairs of processing stages.
In use, when an edge device (e.g., edge device 186) is initially connected to the switch fabric system 100, that edge device 186 can transmit an initiation signal over the control plane using a broadcast protocol (e.g., Intermediate System-to-Intermediate System (IS-IS), Open Shortest Path First (OSPF), etc.) to the other devices connected to the control plane (e.g., network management module 160, edge devices 182, 184, 186) to indicate and/or advertise its presence. As described in further detail herein, the network management module 160 sends a provisioning signal back to that edge device 186. As discussed above and in further detail herein, such a provisioning signal can provide a device identifier and/or any other appropriate identifier and/or information to the edge device 186. Additionally, in some embodiments, the provisioning signal can initiate a network control entity 196 at the edge device 186 and assign that network control entity 196 to a virtual switch fabric system. In assigning the network control entity 196 to a virtual switch fabric system, the network management module 160 can also provide the network control entity 196 with an address and/or identifier of each of the other network control entities within that virtual switch fabric system. In other embodiments, the provisioning signal can assign the ports at the edge device 186 to a network control entity at another edge device 182, 184. As described in further detail herein, in some embodiments, such initiation and/or provisioning information can be provided in a TLV portion of an IS-IS message.
After provisioning is complete, the network control entity 196 can use a selective protocol (e.g., Border Gateway Protocol and/or the like) to provide forwarding-state information to the other network control entities associated with the same virtual switch fabric system but not to the network control entities outside of the same virtual switch fabric system. Such forwarding-state information (e.g., port identifiers, network segment identifiers, peripheral processing device identifiers, edge device identifiers, data plane module identifiers, next hop references, next hop identifiers, etc.) includes information related to and/or can be associated with the peripheral processing devices 134 operatively coupled to the edge device 186. The other network control entities associated with the same virtual switch fabric system as the edge device 186 can receive and store the forwarding-state information in a routing, switching and/or lookup table. Because a selective protocol, such as BGP, is used to send the forwarding-state information to the other network control entities, the network control entity 196 sends its forwarding-state information to the network control entities that are part of the same virtual switch fabric system without sending it to network control entities associated with other virtual switch fabric systems. Using a selective protocol also reduces the amount of traffic and/or congestion that would otherwise be on the control plane of the switch fabric system 100.
After forwarding-state information has been exchanged between network control entities of the same virtual switch fabric system, the network control entities can send and/or store the forwarding-state information at a data plane module of the edge devices having ports associated with each of the network control entities. For example, the network control entity 194 can store the forwarding-state information in a routing, switching and/or lookup table associated with a data plane module (not shown) of the edge device 184. More specifically, the network control entity 194 can store the forwarding-state information in a memory at the edge device 184 (e.g., memory 252 of
A data packet (e.g., an Ethernet packet) can be sent between peripheral processing devices 114, 124, 134 associated with the same virtual switch fabric system via the switch fabric system 100. For example, a data packet can be sent from a first peripheral processing device 124′ to a second peripheral processing device 134′ via path 195 through the data plane of the switch fabric system 100. Peripheral processing device 124′ transmits the data packet to the data plane module (not shown) at the edge device 184. Such a data packet includes a header with the device identifier of destination peripheral processing device 134′. The data plane module of the edge device 184 can retrieve the forwarding-state information associated with the peripheral processing device 134′ from the lookup, routing and/or switching table stored in a memory of the edge device 184. More specifically, the data plane module at the edge device 184 can use a destination identifier associated with the peripheral processing device 134′ and in a header portion of the data packet to query the lookup, routing and/or switching table for the appropriate forwarding-state information. The data plane module can then append such forwarding-state information to the data packet and send the data packet to the switch fabric 102. The switch fabric can use the appended forwarding-state information to route and/or switch the data packet through the switch fabric and to the edge device 186. The edge device 186 can then prepare and send the data packet to the peripheral processing device 134′.
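A simplified sketch of the data plane module's lookup-and-append step is shown below. The table keys, field names and destination identifier are illustrative assumptions rather than the actual table format.

```python
# Illustrative lookup table populated from forwarding-state information learned
# over the control plane; keys and field names are assumptions for this sketch.
lookup_table = {
    "peripheral_device_134_prime": {"egress_edge_device": "edge_device_186", "egress_port": 7},
}

def handle_packet(header: dict, payload: bytes) -> dict:
    """Data plane module behavior: query the table with the destination identifier
    from the packet header and append the forwarding state to the packet before
    handing it to the switch fabric."""
    state = lookup_table.get(header["destination"])
    if state is None:
        raise LookupError("no forwarding-state information for this destination")
    return {"forwarding_state": state, "header": header, "payload": payload}

fabric_bound = handle_packet({"destination": "peripheral_device_134_prime"}, b"...data...")
print(fabric_bound["forwarding_state"]["egress_edge_device"])  # edge_device_186
```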
In some embodiments, prior to being sent to the switch fabric 102, the edge device 184 can divide and/or partition the data packet into one or more data cells (e.g., fixed length frames of data). The cells can be forwarded, routed and/or switched to the edge device 186 via the switch fabric 102. The edge device 186 can reassemble the data packet from the data cells prior to sending the data packet to the peripheral processing device 134′.
Data paths 305 operatively couple the edge devices 310, 320, 330 and the compute device 350 with each other. The data paths 305 can include optical links, electrical links, wireless links and/or the like. Accordingly, the edge devices 310, 320, 330 and/or the compute device 350 can send signals to and/or receive signals from the other edge devices 310, 320, 330 and/or the compute device 350 via the control plane connections (i.e., data paths 305). In some embodiments and as shown in
In some embodiments, an address and/or identifier (e.g., a MAC address, IP address, etc.) of network management module 355 can be dynamic. Similarly stated, the address and/or identifier of the network management module 355 is not fixed and can change each time the network management module 355 and/or the compute device 350 reboots and/or is reconfigured. In such a manner, the address of the network management module 355 can adapt and/or be established according to the characteristics and/or requirements of the specific switch fabric system. In other embodiments, the address and/or identifier of the network management module 355 can be fixed such that it remains the same each time the compute device 350 reboots and/or is reconfigured.
Additionally, as described in further detail herein, the network management module 355 can be configured to listen for initiation signals (e.g., initiation signal 362) sent over the control plane on a fixed multicast address. In some embodiments, such a fixed multicast address can be the same each time the network management module 355 and/or the compute device 350 reboots and/or is reconfigured. In other embodiments, the multicast address can be dynamic such that it does not remain the same each time the network management module 355 and/or the compute device 350 reboots and/or is reconfigured.
In use, a network administrator and/or other user can physically couple an edge device (e.g., edge device 320) to the switch fabric system. Such a physical connection couples the edge device 320 to the compute device 350 and the other edge devices 310, 330 within the control plane 300 of the switch fabric system. Similarly stated, physical connections (e.g., data paths 305) are established between the edge device 320 and the compute device 350 and the other edge devices 310, 330. Additionally, in some embodiments, the edge device 320 is operatively coupled to a data plane of the switch fabric system (i.e., a switch fabric similar to switch fabric 102) when the network administrator and/or other user physically couples the edge device 320 to the switch fabric system.
After the edge device 320 is physically coupled to the switch fabric system, the edge device 320 can send within the control plane 300 an initiation signal 362 to the other devices (e.g., edge devices 310, 330 and compute device 350) on the fixed multicast address using a broadcast protocol (e.g., IS-IS, OSPF, etc.). Similarly stated, the edge device 320 can broadcast its presence in the switch fabric system over the control plane 300. Because a broadcast protocol (e.g., IS-IS, OSPF, etc.) is used to send the initiation signal, the network management module 355 can have a dynamic address and/or identifier, as described above. In other embodiments, the network management module 355 can have a fixed address and/or identifier and the initiation signal can be sent to that address using a targeted protocol (e.g., the initiation signal can be sent to the network management module 355 without being sent to the other edge devices 310, 330).
The initiation signal 362 can include any suitable information to be used by the network management module 355 to provision the edge device 320. In some embodiments, for example, the initiation information can include a type of ports (e.g., Fibre-Channel, Ethernet, etc.) of the edge device 320, the speed of the ports of the edge device 320, information associated with the peripheral processing devices operatively coupled to the ports of the edge device, the port, slot and/or chassis of the switch fabric system to which the edge device 320 is coupled, and/or the like.
In some embodiments, such initiation information can be included within a type-length-value (TLV) portion of an IS-IS message. A TLV portion of a message can represent the data by indicating the type of data (e.g., type of ports, speed of the ports, etc.), the length of the data (e.g., the size), followed by the value of the data (e.g., an identifier indicating the type of ports, the speed of the ports, etc.). Accordingly, using TLV portions of a message, the types, lengths and values of the initiation information can be easily parsed by the network management module.
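For illustration, a type-length-value element can be packed as a one-byte type, a one-byte length and the value bytes, and a receiver can walk a buffer of such elements. The type numbers used below for port type and port speed are hypothetical placeholders, not assigned IS-IS TLV types.

```python
import struct

def encode_tlv(tlv_type: int, value: bytes) -> bytes:
    """Pack one type-length-value element: a 1-byte type, a 1-byte length,
    then the value bytes."""
    return struct.pack("!BB", tlv_type, len(value)) + value

def parse_tlvs(buffer: bytes):
    """Walk a buffer of concatenated TLV elements, yielding (type, value) pairs."""
    offset = 0
    while offset < len(buffer):
        tlv_type, length = struct.unpack_from("!BB", buffer, offset)
        offset += 2
        yield tlv_type, buffer[offset:offset + length]
        offset += length

# Hypothetical type numbers for the initiation information named above.
PORT_TYPE_TLV, PORT_SPEED_TLV = 1, 2
message = encode_tlv(PORT_TYPE_TLV, b"ethernet") + encode_tlv(PORT_SPEED_TLV, b"10G")
print(list(parse_tlvs(message)))  # [(1, b'ethernet'), (2, b'10G')]
```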
The network management module 355 can actively listen on and/or monitor a fixed multicast address for initiation signals, such as initiation signal 362. Accordingly, when the edge device 320 sends the initiation signal 362 on the fixed multicast address, the network management module 355 can receive the initiation signal 362. In some embodiments, the other edge devices 310, 330 are configured to discard initiation signals received on the fixed multicast address. In other embodiments, the other edge devices 310, 330 can receive the initiation signals at the fixed multicast address and store the information contained therein.
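The listen-on-a-fixed-group-address pattern can be pictured with the sketch below. Note that IS-IS itself runs directly over the data link layer rather than over UDP, so the UDP multicast socket here is only a stand-in used to keep the example self-contained; the group address and port are assumptions.

```python
import socket
import struct

FIXED_MULTICAST_GROUP = "239.1.1.1"  # assumed fixed multicast address
PORT = 5000                          # assumed port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# Join the fixed multicast group so that initiation signals sent to it are delivered here.
membership = struct.pack("4sl", socket.inet_aton(FIXED_MULTICAST_GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

while True:
    data, sender = sock.recvfrom(4096)
    # Hand the raw initiation signal to the provisioning logic (not shown).
    print(f"initiation signal from {sender}: {len(data)} bytes")
```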
The network management module 355 can provision the edge device 320 based on the initiation signal 362. In some embodiments, for example, the network management module 355 can assign a device identifier and/or address to the edge device 320 and/or the ports of the edge device 320. Additionally, as shown in
Returning to
In some embodiments, the provisioning signal 364 is sent to the same multicast address as the initiation signal 362. In other embodiments, the provisioning signal 364 is sent to a different multicast address than the initiation signal 362. In either embodiment, the edge devices 310, 320, 330 (and/or the network control entities 312, 321, 322, 332 at the edge devices 310, 320, 330) can listen to and/or monitor the appropriate multicast address to receive the provisioning signal 364. As described in further detail herein, use of such a broadcast protocol allows the switch fabric system to be automatically provisioned such that network control entities within the switch fabric system can share forwarding-state information using a targeted protocol such as the Border Gateway Protocol (BGP). Similarly stated, the routing tables at the network control entities 312, 321, 322, 332 at the edge devices 310, 320, 330 can be automatically populated with the addresses and/or identifiers of the other network control entities 312, 321, 322, 332. As such, a system administrator does not need to manually configure the network control entities 312, 321, 322, 332 as BGP peers.
Upon receiving such provisioning information, for example, the edge device 320 can initiate the network control entities 321, 322 and/or the other edge devices 310, 330 can store the addresses and/or identifiers of the network control entities 321, 322. In some embodiments, any other suitable rules, policies, and/or identifiers can be provided to the edge device 320 to be provisioned and/or the other edge devices 310, 330 via the provisioning signal 364.
In some embodiments, before storing and/or implementing the information and/or instructions within the provisioning signal 364, the edge devices 310, 330 can parse the received provisioning signal 364 for virtual switch fabric identifiers associated with the network control entities 321, 322 to be initiated at the edge device 320. If the edge device 310 or 330 does not have a network control entity associated with the same virtual switch fabric system as one of the network control entities 321, 322, that edge device 310 or 330 can discard the provisioning signal 364. Alternatively, if that edge device 310 or 330 includes a network control entity associated with the same virtual switch fabric system as at least one of the network control entities 321, 322, that edge device 310 or 330 can store and/or implement the relevant portion of the provisioning signal 364.
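A sketch of that filtering decision follows; the message shape, entity entries and virtual switch fabric system identifiers are hypothetical.

```python
def filter_provisioning_signal(local_vsf_ids: set, signal: dict):
    """How an edge device that overhears a provisioning signal might decide what,
    if anything, to keep (message shape and identifiers are assumed)."""
    relevant = [entry for entry in signal["network_control_entities"]
                if entry["vsf_id"] in local_vsf_ids]
    return relevant or None  # None means: discard the signal entirely

overheard = {"network_control_entities": [
    {"address": "10.0.1.21", "vsf_id": "vsf_1"},
    {"address": "10.0.1.22", "vsf_id": "vsf_2"},
]}
print(filter_provisioning_signal({"vsf_1"}, overheard))  # keeps only the vsf_1 entry
print(filter_provisioning_signal({"vsf_3"}, overheard))  # None, so the signal is discarded
```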
After the edge device 320 has been provisioned (e.g., the network control entities 321, 322 initiated, the addresses and/or identifiers of the edge device 320 and/or the network control entities 321, 322 made available to the other edge devices 310, 330 and/or network control entities 312, 332, rules and/or policies implemented, and/or the like), the other edge devices 310, 330 and/or the network control entities 312, 332 at the other edge devices 310, 330 can send addresses and/or identifiers to the edge device 320 and/or the network control entities 321, 322. In some embodiments, network control entities associated with a same virtual switch fabric system as the network control entities 321, 322 send such information to the network control entities 321, 322, while network control entities not associated with the same virtual switch fabric system do not send such information. In some embodiments, such information can be sent similar to forwarding-state information using a targeted protocol such as BGP. In other embodiments, such information is broadcast on a multicast address using a broadcast protocol, such as IS-IS. In such a manner, the edge device 320 and/or the network control entities 321, 322 can receive the addresses and/or identifiers of the other edge devices 310, 330 within the switch fabric system.
Each of the edge devices 310, 320, 330 includes at least one network control entity 312, 321, 322, 332 to manage a group of ports 360, 362, 364, 366. Specifically, the edge device 310 includes network control entity 312 that manages the group of ports 366 (i.e., ports 316-318); the edge device 320 includes network control entity 321 that manages the group of ports 362 (i.e., ports 327 and 328) and the network control entity 322 that manages the group of ports 360 (i.e., ports 315, 325 and 326); and the edge device 330 includes network control entity 332 that manages the group of ports 364 (i.e., ports 335-337). While
A network control entity can manage forwarding-state information for all ports of an edge device, a subset of ports associated with an edge device, or a set of ports associated with two or more edge devices. For example, the group of ports 366 includes ports 316, 317, 318 located at edge device 310 and managed by network control entity 312, also located at edge device 310. Similarly, the group of ports 362 and the group of ports 364 both include ports 327-328 and 335-337 located at edge devices 320 and 330, respectively, and are managed by network control entities 321 and 332, respectively. The group of ports 360, however, includes ports 315, 325, 326 located at both edge device 310 and edge device 320. As shown in
As described above, the network management module 355 can reassign network control entities by, for example, sending a provisioning signal over the control plane. For example, a port 315-318, 325-328, 335-337 can be assigned to a different network control entity 312, 322, 332 when available processing capacity at the currently assigned network control entity 312, 322, 332 crosses a threshold. In other embodiments, a port 315-318, 325-328, 335-337 can be reassigned to a different network control entity 312, 322, 332 to improve traffic flow over a portion of the control plane.
Peripheral processing devices can be operatively coupled to the ports 315-318, 325-328, 335-337 of the edge devices 310, 320, 330. Such peripheral processing devices can be similar to the peripheral processing devices 114, 124, 134, shown and described above with respect to
Each network control entity 312, 321, 322, 332 can send forwarding-state information (e.g., port identifiers, network segment identifiers, peripheral processing device identifiers, edge device identifiers, data plane module identifiers, next hop references, next hop identifiers, etc.) to the other network control entities 312, 321, 322, 332 via the logical connections 307. Consider the following example. The network control entity 321 can detect a change in state at the port 327. For example, after a peripheral processing device (not shown) is initially coupled to the port 327, the peripheral processing device can send forwarding-state information associated with that peripheral processing device to the network control entity 321. In some embodiments, such forwarding-state information can include a peripheral processing device identifier associated with the peripheral processing device, such as, for example, a media access control (MAC) address, an internet protocol (IP) address, and/or the like.
The network control entity 321 can update and/or revise its configuration table accordingly. The network control entity 321 can then send updated forwarding-state information 370 to the network control entity 322, as shown in
In some embodiments, the network control entity 321 can send the forwarding-state information 370 to the network control entity 322 using a targeted higher level protocol (e.g., an application layer protocol) such as, for example, Border Gateway Protocol (BGP). In such embodiments, the network control entity 321 can use such a higher level protocol in conjunction with any suitable lower level protocol (e.g., a data link layer protocol), such as, for example, Ethernet and/or Fibre Channel, to send the forwarding-state information 370. While BGP can be implemented at the application layer, it can be used to send forwarding-state information used to populate a routing table (e.g., at the network control entity 322) associated with a network layer. Using a targeted protocol, such as BGP, the network control entity 321 can send the forwarding-state information 370 to specific network control entities (e.g., 322) while refraining from sending the forwarding-state information to other network control entities (e.g., 312).
In some embodiments, the network control entity 322 can store the forwarding-state information 370 received from the network control entity 321 in a memory associated with the network control entity 322. For example, the network control entity 322 can store the forwarding-state information 370 at the memory (e.g., memory 252 of
The network control entity 322 can then send the updated forwarding-state information 370 to data plane modules (not shown) at the edge devices 320, 310 at which ports 315, 325, 326 associated with the network control entity 322 are located. In some embodiments, for example, the network control entity 322 can store the forwarding-state information 370 at a portion of the memory (e.g., within a routing table) of the edge device 320 allocated and/or partitioned for data, processes and/or applications associated with the data plane. In such embodiments, the memory of the edge device 320 can store the forwarding-state information 370 in a portion of the memory associated with the network control entity 322 as well as in a portion of the memory associated with the data plane module. In other embodiments, the forwarding-state information 370 is stored within a single location within the memory of the edge device 320 accessible by the applicable processes at the edge device 320 (including the network control entity 322 and the data plane module). The network control entity 322 also sends the forwarding-state information 370 to a data plane module at the edge device 310 (port 315 at edge device 310 is associated with the network control entity 322). Similar to the edge device 320, the edge device 310 can store the forwarding-state information within a memory (e.g., within a routing table). In such a manner, forwarding-state information can be distributed to the applicable data plane modules. Additionally, in such a manner, forwarding-state information can be updated at the network control entities 312, 321, 322, 332 each time the topology of the switch fabric system is updated.
In some embodiments, the network control entity 312 can be part of a different virtual switch fabric system (e.g., network segment) than the network control entities 321 and 322. In such embodiments, the network control entity 321 can send forwarding-state information 370 to the network control entities (e.g., 322) associated with the same virtual switch fabric system while refraining from sending the forwarding-state information to the network control entities (e.g., 312) outside of that virtual switch fabric system and/or associated with another virtual switch fabric system. In such a manner, multiple virtual switch fabric systems (e.g., network segments) can be defined within the switch fabric system 300. In other embodiments, the network control entity 321 also sends the updated forwarding-state information 370 to the network control entity 312. In such embodiments, the network control entity 312 can determine that the forwarding-state information 370 is associated with a different virtual switch fabric system and, accordingly, discard the forwarding-state information 370.
After the current forwarding-state information 370 has been distributed to the appropriate network control entities, a source peripheral processing device can send a data packet to a destination peripheral processing device (see, e.g.,
A second signal is received from a network management module, at 604. A first network control entity is initiated at the edge device in response to the second signal, at 606. Additionally, the first network control entity is assigned a device identifier and a virtual switch fabric system identifier associated with a virtual switch fabric system from the multiple virtual switch fabric systems in response to the second signal, at 608. The first network control entity manages at least a portion of the edge device. In some embodiments, for example, the first network control entity manages forwarding-state information associated with at least one port at the edge device. In such embodiments, the at least one port and a peripheral processing device coupled to the at least one port can be said to be associated with the first network control entity.
Forwarding-state information associated with a peripheral processing device operatively coupled to the edge device is sent, using the first network control entity, to a second network control entity associated with the virtual switch fabric system using a selective protocol, at 610. The selective protocol can be used such that the forwarding-state information is sent to the network control entities associated with the virtual switch fabric system but not network control entities associated with other virtual switch fabric systems. Accordingly, the forwarding-state information is sent to the network control entities associated with the same virtual switch fabric system. As discussed above, in some embodiments, the selective protocol can be the Border Gateway Protocol (BGP). As such, the first network control entity and the second network control entity can be said to be BGP speakers.
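The edge-device side of this method can be pictured with the sketch below. The class names, message fields and the stubbed control plane are illustrative assumptions; the step numbers in the comments refer to the steps described above.

```python
class ControlPlaneStub:
    """Minimal stand-in that records messages so the sketch can run."""
    def __init__(self):
        self.messages = []
    def broadcast(self, message):
        self.messages.append(("broadcast", message))
    def send(self, peer, message):
        self.messages.append((peer, message))

class EdgeDeviceControl:
    """Edge-device side of the method described above."""
    def __init__(self, control_plane):
        self.control_plane = control_plane
        self.network_control_entity = None

    def announce(self):
        # First signal: broadcast presence when coupled to the switch fabric system.
        self.control_plane.broadcast({"kind": "initiation"})

    def on_provisioning_signal(self, signal):
        # Second signal (steps 604-608): initiate a network control entity and record
        # its device identifier, virtual switch fabric system identifier, and peers.
        self.network_control_entity = {
            "device_id": signal["device_id"],
            "vsf_id": signal["vsf_id"],
            "peers": signal["peers"],
        }

    def send_forwarding_state(self, state):
        # Step 610: send forwarding-state information only to peers in the same
        # virtual switch fabric system, i.e., selectively rather than by broadcast.
        for peer in self.network_control_entity["peers"]:
            self.control_plane.send(peer, {"vsf_id": self.network_control_entity["vsf_id"],
                                           "forwarding_state": state})

plane = ControlPlaneStub()
device = EdgeDeviceControl(plane)
device.announce()
device.on_provisioning_signal({"device_id": "edge-1", "vsf_id": "vsf_1", "peers": ["nce_b"]})
device.send_forwarding_state({"peripheral_device": "pd-1", "port": 3})
print(plane.messages)
```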
While various embodiments have been described above, it should be understood that they have been presented by way of example only, and not limitation. Where methods described above indicate certain events occurring in certain order, the ordering of certain events may be modified. Additionally, certain of the events may be performed concurrently in a parallel process when possible, as well as performed sequentially as described above.
Embodiments shown and described above refer to multiple peripheral processing devices, including compute nodes, storage nodes, service nodes and routers. In some embodiments, one or more of the compute nodes can be general-purpose computational engines that can include, for example, processors, memory, and/or one or more network interface devices (e.g., a network interface card (NIC)). In some embodiments, the processors within a compute node can be part of one or more cache coherent domains. In some embodiments, the compute nodes can be host devices, servers, and/or so forth. In some embodiments, one or more of the compute nodes can have virtualized resources such that any compute node (or a portion thereof) can be substituted for any other compute node (or a portion thereof) operatively coupled to a switch fabric system.
In some embodiments, one or more of the storage nodes can be devices that include, for example, processors, memory, locally-attached disk storage, and/or one or more network interface devices. In some embodiments, the storage nodes can have specialized modules (e.g., hardware modules and/or software modules) configured to enable, for example, one or more of the compute nodes to read data from and/or write data to one or more of the storage nodes via a switch fabric. In some embodiments, one or more of the storage nodes can have virtualized resources so that any storage node (or a portion thereof) can be substituted for any other storage node (or a portion thereof) operatively coupled to a switch fabric system.
In some embodiments, one or more of the services nodes can be an open systems interconnection (OSI) layer-4 through layer-7 device that can include, for example, processors (e.g., network processors), memory, and/or one or more network interface devices (e.g., 10 Gb Ethernet devices). In some embodiments, the services nodes can include hardware and/or software configured to perform computations on relatively heavy network workloads. In some embodiments, the services nodes can be configured to perform computations on a per packet basis in a relatively efficient fashion (e.g., more efficiently than can be performed at, for example, a compute node). The computations can include, for example, stateful firewall computations, intrusion detection and prevention (IDP) computations, extensible markup language (XML) acceleration computations, transmission control protocol (TCP) termination computations, and/or application-level load-balancing computations. In some embodiments, one or more of the services nodes can have virtualized resources so that any service node (or a portion thereof) can be substituted for any other service node (or a portion thereof) operatively coupled to a switch fabric system.
In some embodiments, one or more of the routers can be networking devices configured to connect at least a portion of a switch fabric system (e.g., a data center) to another network (e.g., the global Internet). In some embodiments, for example, a router can enable communication between components (e.g., peripheral processing devices, portions of the switch fabric) associated with a switch fabric system. The communication can be defined based on, for example, a layer-3 routing protocol. In some embodiments, one or more of the routers can have one or more network interface devices (e.g., 10 Gb Ethernet devices) through which the routers can send signals to and/or receive signals from, for example, a switch fabric and/or other peripheral processing devices.
Some embodiments described herein relate to a computer storage product with a non-transitory computer-readable medium (also can be referred to as a non-transitory processor-readable medium) having instructions or computer code thereon for performing various computer-implemented operations. The computer-readable medium (or processor-readable medium) is non-transitory in the sense that it does not include transitory propagating signals per se (e.g., a propagating electromagnetic wave carrying information on a transmission medium such as space or a cable). The media and computer code (also can be referred to as code) may be those designed and constructed for the specific purpose or purposes. Examples of non-transitory computer-readable media include, but are not limited to: magnetic storage media such as hard disks, floppy disks, and magnetic tape; optical storage media such as Compact Disc/Digital Video Discs (CD/DVDs), Compact Disc-Read Only Memories (CD-ROMs), and holographic devices; magneto-optical storage media such as optical disks; carrier wave signal processing modules; and hardware devices that are specially configured to store and execute program code, such as Application-Specific Integrated Circuits (ASICs), Programmable Logic Devices (PLDs), Read-Only Memory (ROM) and Random-Access Memory (RAM) devices.
Examples of computer code include, but are not limited to, micro-code or micro-instructions, machine instructions, such as those produced by a compiler, code used to produce a web service, and files containing higher-level instructions that are executed by a computer using an interpreter. For example, embodiments may be implemented using Java, C++, or other programming languages (e.g., object-oriented programming languages) and development tools. Additional examples of computer code include, but are not limited to, control signals, encrypted code, and compressed code.
While various embodiments have been described above, it should be understood that they have been presented by way of example only, not limitation, and various changes in form and details may be made. Any portion of the apparatus and/or methods described herein may be combined in any combination, except mutually exclusive combinations. The embodiments described herein can include various combinations and/or sub-combinations of the functions, components and/or features of the different embodiments described.
This application claims priority to, and the benefit of, U.S. Provisional Patent Application Ser. No. 61/316,720, filed on Mar. 23, 2010, and entitled “Methods And Apparatus Related To Distributed Control Plane Switch Management.”