This relates to communication networks, and more particularly, to communications networks having network switches that are controlled by a controller.
Packet-based networks such as the internet and local data networks that are connected to the internet include network switches. Network switches are used in forwarding packets from packet sources to packet destinations. The packets are sometimes referred to as frames.
It can be difficult or impossible to control the switches of one vendor using the equipment of another vendor. This is because the switch equipment of one vendor may use a different operating system and set of control procedures than the switch equipment of another vendor. To address the challenges associated with controlling different types of switch platforms, cross-platform protocols have been developed. These protocols allow centralized control of otherwise incompatible switches.
Cross-platform controller clients can be included on the switches in a network. The controller clients are able to communicate with a corresponding controller server over network paths. Because the controller clients can be implemented on a variety of switch hardware, it is possible for a single controller to control switch equipment that might otherwise be incompatible.
The network may include end hosts that send network packets to the switches for forwarding through the network. End hosts in the network sometimes send broadcast network packets that are flooded throughout the network (i.e., the broadcast network packets are destined for all end hosts in the network). As an example, an end host may send broadcast network packets to discover network addresses of other end hosts. The flooding associated with broadcast network packets can generate undesirable amounts of network traffic (e.g., because the network packets may be forwarded by the network switches to many end hosts). Therefore, it may be desirable to provide the network with improved network packet broadcasting capabilities.
A network may include end hosts that are coupled to switches that are used to forward network packets between the end hosts. The switches may be controlled by a controller such as a centralized controller server or a distributed controller server. The controller may maintain information that identifies subsets of the end hosts that are associated with respective broadcast domains. The information may include a list of end hosts for each broadcast domain. The list of end hosts for each broadcast domain may be gathered by the controller from a user such as a network administrator.
The controller may configure the switches in the network to identify broadcast network packets and to forward the broadcast network packets to the controller. For example, the controller may provide flow table entries to the switches that direct the switches to forward matching broadcast network packets to the controller. The controller may receive a given broadcast network packet from the switches and identify which broadcast domain is associated with that broadcast network packet (e.g., the controller may identify which subset of the end hosts is associated with the broadcast network packet).
The controller may identify which broadcast domain is associated with a received broadcast network packet based on information such as source information retrieved from the broadcast network packet. For example, the controller may retrieve source address information such as source Ethernet address information from header fields of the broadcast network packet and use the source address information to determine which broadcast domain is associated with the broadcast network packet.
The controller may identify switches that are coupled to the end hosts of a broadcast domain associated with a received broadcast network packet and control the identified switches to forward the broadcast network packet to the end hosts of the broadcast domain. For example, the controller may send control messages through network control paths to the identified switches. In this scenario, the control messages may include the broadcast network packet and instructions that direct the switches to forward the broadcast network packet to ports that are coupled to the end hosts of the associated broadcast domain.
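The domain lookup described above can be sketched in a few lines. This is a minimal illustration, not any real controller API; the addresses, domain names, and function names are hypothetical:

```python
# Hypothetical sketch of the controller's broadcast-domain bookkeeping:
# a mapping from domain name to the Ethernet addresses of its member
# end hosts, and a lookup keyed on a packet's source address.

BROADCAST_DOMAINS = {
    "engineering": {"00:11:22:33:44:55", "00:11:22:33:44:66"},
    "finance": {"00:aa:bb:cc:dd:ee"},
}

def domain_for_source(src_mac):
    """Return (domain name, member addresses) for the end host that sent
    a broadcast packet, or None if the source is in no configured domain."""
    for name, members in BROADCAST_DOMAINS.items():
        if src_mac in members:
            return name, members
    return None
```

A broadcast packet whose source address is `00:11:22:33:44:55` would then be delivered only to the remaining members of the `engineering` domain.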
Further features of the present invention, its nature and various advantages will be more apparent from the accompanying drawings and the following detailed description.
Networks such as the internet and the local and regional networks that are coupled to the internet rely on packet-based switches. These switches, which are sometimes referred to herein as network switches, packet processing systems, or packet forwarding systems can forward packets based on address information. In this way, data packets that are transmitted by a packet source may be delivered to a packet destination. In network terms, packet sources and destinations are sometimes referred to as end hosts. Examples of end hosts are personal computers, servers, and other computing equipment such as portable electronic devices that access the network using wired or wireless technologies.
Network switches range in capability from relatively small Ethernet switches and wireless access points to large rack-based systems that include multiple line cards, redundant power supplies, and supervisor capabilities. It is not uncommon for networks to include equipment from multiple vendors. Network switches from different vendors can be interconnected to form a packet forwarding network, but can be difficult to manage in a centralized fashion due to incompatibilities between their operating systems and control protocols.
These potential incompatibilities can be overcome by incorporating a common cross-platform control module (sometimes referred to herein as a controller client) into each network switch. A centralized cross-platform controller server may interact with each of the controller clients over respective network links. The use of a cross-platform controller server and corresponding controller clients allows potentially disparate network switch equipment to be centrally managed.
With one illustrative configuration, which is sometimes described herein as an example, centralized control is provided by one or more controller servers such as controller server 18 of
In distributed controller arrangements, controller nodes can exchange information using an intra-controller protocol. For example, if a new end host connects to network hardware (e.g., a switch) that is only connected to a first controller node, that first controller node may use the intra-controller protocol to inform other controller nodes of the presence of the new end host. If desired, a switch or other network component may be connected to multiple controller nodes. Arrangements in which a single controller server is used to control a network of associated switches are sometimes described herein as an example.
Controller server 18 of
Controller server 18 may be used to implement network configuration rules 20. Rules 20 may specify which services are available to various network entities. As an example, rules 20 may specify which users (or type of users) in network 10 may access a particular server. Rules 20 may, for example, be maintained in a database at computing equipment 12.
Controller server 18 and controller clients 30 at respective network switches 14 may use network protocol stacks to communicate over network links 16.
Each switch (packet forwarding system) 14 may have input-output ports 34 (sometimes referred to as network switch interfaces). Cables may be used to connect pieces of equipment to ports 34. For example, end hosts such as personal computers, web servers, and other computing equipment may be plugged into ports 34. Ports 34 may also be used to connect one of switches 14 to other switches 14.
Packet processing circuitry 32 may be used in forwarding packets from one of ports 34 to another of ports 34 and may be used in performing other suitable actions on incoming packets. Packet processing circuitry 32 may be implemented using one or more integrated circuits such as dedicated high-speed switch circuits and may serve as a hardware data path. If desired, packet processing software 26 that is running on control unit 24 may be used in implementing a software data path.
Control unit 24 may include processing and memory circuits (e.g., one or more microprocessors, memory chips, and other control circuitry) for storing and running control software. For example, control unit 24 may store and run software such as packet processing software 26, may store flow table 28, and may be used to support the operation of controller clients 30.
Controller clients 30 and controller server 18 may be compliant with a network switch protocol such as the OpenFlow protocol (see, e.g., OpenFlow Switch Specification version 1.0.0). One or more clients among controller clients 30 may also be compliant with other protocols (e.g., the Simple Network Management Protocol). Using the OpenFlow protocol or other suitable protocols, controller server 18 may provide controller clients 30 with data that determines how switch 14 is to process incoming packets from input-output ports 34.
With one suitable arrangement, flow table data from controller server 18 may be stored in a flow table such as flow table 28. The entries of flow table 28 may be used in configuring switch 14 (e.g., the functions of packet processing circuitry 32 and/or packet processing software 26). In a typical scenario, flow table 28 serves as cache storage for flow table entries and a corresponding version of these flow table entries is embedded within the settings maintained by packet processing circuitry 32. This is, however, merely illustrative. Flow table 28 may serve as the exclusive storage for flow table entries in switch 14 or may be omitted in favor of flow table storage resources within packet processing circuitry 32. In general, flow table entries may be stored using any suitable data structures (e.g., one or more tables, lists, etc.). For clarity, the data of flow table 28 (whether maintained in a database in control unit 24 or embedded within the configuration of packet processing circuitry 32) is referred to herein as forming flow table entries (e.g., rows in flow table 28).
The example of flow tables 28 storing data that determines how switch 14 is to process incoming packets is merely illustrative. If desired, any packet forwarding decision engine may be used in place of or in addition to flow tables 28 to assist packet forwarding system 14 in making decisions about how to forward network packets. As an example, packet forwarding decision engines may direct packet forwarding system 14 to forward network packets to predetermined ports based on attributes of the network packets (e.g., based on network protocol headers).
If desired, switch 14 may be implemented using a general purpose processing platform that runs control software and that omits packet processing circuitry 32 of
Network switches such as network switch 14 of
Another illustrative switch architecture that may be used in implementing network switch 14 of
With an arrangement of the type shown in
As shown in
Control protocol stack 56 serves as an interface between network protocol stack 58 and control software 54. Control protocol stack 62 serves as an interface between network protocol stack 60 and control software 64. During operation, when controller server 18 is communicating with controller client 30, control protocol stacks 56 generate and parse control protocol messages (e.g., control messages to activate a port or to install a particular flow table entry into flow table 28). By using arrangements of the type shown in
Flow table 28 contains flow table entries (e.g., rows in the table) that have multiple fields (sometimes referred to as header fields). The fields in a packet that has been received by switch 14 can be compared to the fields in the flow table. Each flow table entry may have associated actions. When there is a match between the fields in a packet and the fields in a flow table entry, the corresponding action for that flow table entry may be taken.
An illustrative flow table is shown in
The header fields in header 70 (and the corresponding fields in each incoming packet) may include the following fields: ingress port (i.e., the identity of the physical port in switch 14 through which the packet is being received), Ethernet source address, Ethernet destination address, Ethernet type, virtual local area network (VLAN) identification (sometimes referred to as a VLAN tag), VLAN priority, IP source address, IP destination address, IP protocol, IP ToS (type of service) bits, Transport source port/Internet Control Message Protocol (ICMP) Type (sometimes referred to as source TCP port), and Transport destination port/ICMP Code (sometimes referred to as destination TCP port). Other fields may be used if desired. For example, a network protocol field and a protocol port field may be used.
Each flow table entry (flow entry) is associated with zero or more actions that dictate how the switch handles matching packets. If no forward actions are present, the packet is preferably dropped. The actions that may be taken by switch 14 when a match is detected between packet fields and the header fields in a flow table entry may include the following actions: forward (e.g., ALL to send the packet out on all interfaces, not including the incoming interface, CONTROLLER to encapsulate and send the packet to the controller server, LOCAL to send the packet to the local networking stack of the switch, TABLE to perform actions in flow table 28, IN_PORT to send the packet out of the input port, NORMAL to process the packet with a default forwarding path that is supported by the switch using, for example, traditional level 2, VLAN, and level 3 processing, and FLOOD to flood the packet along the minimum forwarding tree, not including the incoming interface). Additional actions that may be taken by switch 14 include: an enqueue action to forward a packet through a queue attached to a port and a drop action (e.g., to drop a packet that matches a flow table entry with no specified action). Modify-field actions may also be supported by switch 14. Examples of modify-field actions that may be taken include: Set VLAN ID, Set VLAN priority, Strip VLAN header, Modify VLAN tag, Modify Ethernet source MAC (Media Access Control) address, Modify Ethernet destination MAC address, Modify IPv4 source address, Modify IPv4 ToS bits, Modify transport destination port.
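The action semantics above (zero or more actions per entry; a matching packet with no forward actions is dropped) can be sketched as follows. The dispatch code and data shapes are purely illustrative, and only a few of the listed actions are modeled:

```python
# Illustrative model of flow entry action handling. Action names mirror
# a subset of the OpenFlow 1.0 actions described in the text; this is a
# sketch, not a real switch implementation.

def apply_actions(actions, ports, in_port):
    """Return the list of destinations the packet is sent to."""
    out = []
    for action in actions:
        kind = action[0]
        if kind == "ALL":            # all interfaces except the incoming one
            out.extend(p for p in ports if p != in_port)
        elif kind == "IN_PORT":      # send back out of the input port
            out.append(in_port)
        elif kind == "OUTPUT":       # forward to one specific port
            out.append(action[1])
        elif kind == "CONTROLLER":   # encapsulate and send to the controller
            out.append("controller")
    return out                       # empty list: packet is dropped

ports = [1, 2, 3, 4]
```

With `in_port=3`, an `ALL` action yields `[1, 2, 4]`, and an empty action list yields `[]` (the drop case).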
The entry of the first row of the
The entry of the second row of table of
The third row of the table of
Flow table entries of the type shown in
Consider, as an example, a network that contains first and second switches connected in series between respective end hosts. When sending traffic from a first of the end hosts to a second of the end hosts, it may be desirable to route traffic through the first and second switches. If the second switch is connected to port 3 of the first switch, if the second end host is connected to port 5 of the second switch, and if the destination IP address of the second end host is 172.12.3.4, controller server 18 may provide the first switch with the flow table entry of
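The two-switch example above can be expressed as the pair of flow entries the controller might install. The dict-based entry format is illustrative; the port numbers (3 and 5) and the destination IP address (172.12.3.4) come from the text:

```python
# Flow entries for the series-connected switches in the example: both
# match the second end host's destination IP address, with all other
# fields wildcarded, and differ only in their output port.

first_switch_entry = {
    "match": {"ip_dst": "172.12.3.4"},   # other fields wildcarded
    "action": ("OUTPUT", 3),             # port 3 leads to the second switch
}
second_switch_entry = {
    "match": {"ip_dst": "172.12.3.4"},
    "action": ("OUTPUT", 5),             # port 5 leads to the second end host
}

def output_port(entry, packet):
    """Return the entry's output port if the packet matches, else None."""
    if all(packet.get(f) == v for f, v in entry["match"].items()):
        return entry["action"][1]
    return None
```

A packet addressed to 172.12.3.4 thus exits the first switch on port 3 and the second switch on port 5, completing the path between the end hosts.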
Illustrative steps that may be performed by switch 14 in processing packets that are received on input-output ports 34 are shown in
At step 80, switch 14 compares the fields of the received packet to the fields of the flow table entries in the flow table 28 of that switch to determine whether there is a match. Some fields in a flow table entry may contain complete values (i.e., complete addresses). Other fields may contain wildcards (i.e., fields marked with the “don't care” wildcard character of “*”). Yet other fields may have partially complete entries (i.e., a partial address that is partially wildcarded). Some fields may use ranges (e.g., by restricting a TCP port number to a value between 1 and 4096) and in effect use the range to implement a type of partial wildcarding. In making field-by-field comparisons between the received packet and the flow table entries, switch 14 can take into account whether or not each field in the flow table entry contains a complete value without any wildcarding, a partial value with wildcarding, or a wildcard character (i.e., a completely wildcarded field).
If it is determined during the operations of step 80 that there is no match between the fields of the packet and the corresponding fields of the flow table entries, switch 14 may send the packet to controller server 18 over link 16 (step 84).
If it is determined during the operations of step 80 that there is a match between the packet and a flow table entry, switch 14 may perform the action that is associated with that flow table entry and may update the counter value in the statistics field of that flow table entry (step 82). Processing may then loop back to step 78, so that another packet may be processed by switch 14, as indicated by line 86.
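The receive loop of steps 78 through 86 can be sketched as below: compare the packet to each flow entry field by field (complete value, partial wildcard, range, or full "*"), take the first match's action and update its statistics counter, or send the packet to the controller on a table miss. All data structures here are illustrative:

```python
# Sketch of the switch packet-processing loop described in the text.

def field_matches(rule, value):
    if rule == "*":                      # completely wildcarded field
        return True
    if isinstance(rule, range):          # range match, e.g. TCP ports 1-4096
        return value in rule
    if isinstance(rule, str) and rule.endswith("*"):
        return str(value).startswith(rule[:-1])   # partially wildcarded field
    return rule == value                 # complete value, no wildcarding

def process_packet(flow_table, packet):
    for entry in flow_table:
        if all(field_matches(r, packet.get(f))
               for f, r in entry["match"].items()):
            entry["counter"] += 1        # update statistics (step 82)
            return entry["action"]
    return ("CONTROLLER",)               # table miss: to controller (step 84)

table = [{"match": {"tcp_dst": range(1, 4097), "ip_dst": "172.12.*"},
          "action": ("OUTPUT", 3),
          "counter": 0}]
```

Here a packet to `172.12.3.4` on TCP port 80 matches and returns `("OUTPUT", 3)`, while a packet to `10.0.0.1` misses and is handed to the controller.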
A controller (e.g., a controller server or other controllers implemented on computing equipment) may be used to control a network of switches. The controller may include one or more controller servers or may be distributed throughout one or more of the switches (e.g., portions of the controller may be implemented on storage and processing circuitry of multiple switches).
As shown in
Network 100 may include one or more controllers such as controller server 18. Controller server 18 may be used to control switches (e.g., switches SW1, SW2, SW3, etc.) via network paths 66. For example, controller server 18 may provide flow table entries to the switches over network paths 66. The example of
End hosts in the network can communicate with other end hosts by transmitting packets that are forwarded by switches in the network. For example, end host H1 may communicate with other end hosts by transmitting network packets to port P11 of switch SW5. In this scenario, switch SW5 may receive the network packets and forward the network packets along appropriate network paths (e.g., based on flow table entries that have been provided by controller server 18).
Switches such as switch SW5 may forward network packets based on information such as destination network addresses retrieved from network packets. For example, switch SW5 may retrieve destination Media Access Control (MAC) address information or other Ethernet address information from the network packets that identifies which end host(s) the network packets should be forwarded to. End hosts in the network may sometimes send broadcast packets that are destined for all other end hosts in the network. For example, end host H1 may send a broadcast packet by transmitting a network packet with a broadcast destination Ethernet address. In this scenario, switches in the network that receive the broadcast packet may identify the broadcast destination Ethernet address and forward the broadcast packet to all other end hosts in the network.
It may be desirable to isolate some of the end hosts from other end hosts by controlling which end hosts receive broadcast packets from any given end host. For example, isolating groups of end hosts from other end hosts may improve network security (e.g., because end hosts in a first group may be prevented from communicating with end hosts in a second group). Controller server 18 may be used to partition network 100 into broadcast domains formed from groups of end hosts. Controller server 18 may control switches in network 100 so that network packets received from end hosts in a given broadcast domain are only forwarded to other end hosts in that broadcast domain, thereby isolating broadcast domains from each other.
Controller server 18 may partition network 100 into broadcast domains by forwarding broadcast network packets from an end host of a given broadcast domain to other end hosts of that broadcast domain through network control paths (e.g., network paths through controller server 18).
Controller server 18 may direct switches in network 100 to forward broadcast network packets that are received from end hosts to controller server 18. The switches may forward the broadcast network packets to controller server 18 via control paths such as paths 66. Controller server 18 may direct the switches to forward broadcast network packets by providing appropriate flow table entries to the switches.
As shown in
As an example, controller server 18 may provide each of switches SW1, SW2, SW3, SW4, and SW5 with flow table entry 216. In this scenario, the switches may use flow table entry 216 to identify broadcast network packets (e.g., network packets that have a broadcast destination Ethernet address) and forward the broadcast network packets to controller server 18.
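What an entry like flow table entry 216 expresses can be sketched as follows: match packets whose destination Ethernet address is the broadcast address and send them to the controller. The dict form is illustrative, not the entry's actual encoding:

```python
# Sketch of a broadcast-to-controller flow entry of the kind the
# controller provides to each client switch.

BROADCAST_MAC = "ff:ff:ff:ff:ff:ff"

broadcast_to_controller_entry = {
    "match": {"eth_dst": BROADCAST_MAC},   # all other fields wildcarded
    "action": ("CONTROLLER",),             # encapsulate, send to controller
}

def handle_at_switch(entry, packet):
    """Return the entry's action for a matching packet, else None so the
    packet falls through to other flow table entries."""
    if packet["eth_dst"] == entry["match"]["eth_dst"]:
        return entry["action"]
    return None
```

A unicast packet falls through this entry untouched; only packets with the broadcast destination address are diverted to the controller.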
Controller server 18 may process broadcast network packets (e.g., broadcast network packets that are forwarded to controller server 18 by switches) to determine which end hosts should receive the broadcast network packets. Controller server 18 may process a broadcast network packet by retrieving information from the broadcast network packet that may be used to identify which end host sent the broadcast network packet. For example, controller server 18 may retrieve information such as source Ethernet address information or other source information from header fields of the broadcast network packet. Controller server 18 may use the retrieved information to determine which broadcast domain is associated with the broadcast network packet (e.g., which broadcast domain is associated with the end host that sent the broadcast network packet).
Consider the scenario in which end host H1 sends a broadcast network packet to port P11 of switch SW5. In this scenario, switch SW5 may forward the broadcast network packet to controller server 18 (e.g., using flow table entries such as flow table entry 216 that have been provided to switch SW5). Controller server 18 may receive the broadcast network packet and retrieve the Ethernet address of end host H1 from the source Ethernet address field of the broadcast network packet. Based on the Ethernet address of end host H1, controller server 18 may identify a corresponding broadcast domain that is associated with end host H1. As an example, controller server 18 may maintain a database or list that identifies source information (e.g., Ethernet addresses) corresponding to each broadcast domain. In this scenario, controller server 18 may use the database to match the retrieved Ethernet address to a corresponding broadcast domain.
If desired, controller server 18 may identify a corresponding broadcast domain for a given broadcast network packet based on information transmitted along with the broadcast network packet (e.g., transmitted by the switch that forwarded the broadcast network packet to controller server 18). For example, in response to receiving a broadcast network packet at port P11, switch SW5 may forward the broadcast network packet to controller server 18 along with information identifying that the broadcast network packet was received at port P11 of switch SW5. In this scenario, controller server 18 may use the information to determine which end host and/or which broadcast domain is associated with the broadcast network packet (e.g., based on network topology information that identifies which end hosts are coupled to which ports).
Controller server 18 may forward a broadcast network packet received from an end host of a given broadcast domain to other end hosts of the broadcast domain (e.g., without forwarding the broadcast network packet to end hosts that are not associated with the broadcast domain). For example, controller server 18 may maintain a database of end host and broadcast domain information and use the database to determine which end hosts should receive the broadcast network packet. Controller server 18 may forward the broadcast network packet to appropriate end hosts by sending the broadcast network packet to switches that are coupled to the end hosts and directing the switches to forward the broadcast network packet to the end hosts. For example, the controller may send control messages that include the broadcast network packet and corresponding instructions for the switches.
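The controller-side forwarding step above can be sketched as grouping the destination ports of the domain's member end hosts (excluding the sender) by switch, and emitting one control message per switch. The topology table, addresses, and message format below are illustrative placeholders:

```python
# Sketch of the controller building per-switch "packet out" control
# messages for a broadcast packet's domain members.

TOPOLOGY = {  # end host address -> (client switch, port) attachment point
    "00:00:00:00:00:05": ("SW4", "P2"),
    "00:00:00:00:00:07": ("SW3", "P1"),
    "00:00:00:00:00:08": ("SW4", "P3"),
}

def packet_out_messages(broadcast_pkt, domain_members, src_mac):
    """Group destination ports by switch into per-switch control messages,
    excluding the end host that sent the broadcast packet."""
    per_switch = {}
    for mac in sorted(domain_members - {src_mac}):
        switch, port = TOPOLOGY[mac]
        per_switch.setdefault(switch, []).append(port)
    return [{"switch": sw, "ports": ports, "packet": broadcast_pkt}
            for sw, ports in sorted(per_switch.items())]
```

For a broadcast from the host at SW3/P1 whose domain also contains the two hosts on SW4, this produces a single message to SW4 listing ports P2 and P3.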
As shown in
Packet out message 218 may be sent by a controller to switches using protocols such as the OpenFlow protocol (e.g., protocols that may be used to generate control paths such as control paths 66 between the controller and the switches). In scenarios such as when packet out message 218 is sent from a controller to switches via OpenFlow control paths (e.g., paths 66), packet out message 218 may be referred to as an OpenFlow control packet, because OpenFlow control packet 218 is sent using the OpenFlow protocol over OpenFlow control paths.
By performing broadcast domain isolation using controller server 18, network traffic associated with network packet broadcasting may be reduced and network performance may be improved. For example, a broadcast network packet sent from end host H7 and received by end host H5 via network control paths (e.g., network paths through controller server 18 and including control paths 66) may bypass switches SW1 and SW2, thereby reducing the load on switches SW1 and SW2. Network traffic associated with network packet broadcasting may be reduced because broadcast network packets are only forwarded to end hosts that are members of associated broadcast domains (e.g., end hosts that are not associated with the broadcast domain of a given broadcast network packet may not receive that broadcast network packet).
A network may be formed from switches or other packet forwarding systems that have controller clients (and therefore are controlled by a controller such as controller server 18) and switches that do not have controller clients (e.g., switches that are not controlled by a controller). The switches with controller clients may sometimes be referred to herein as client switches. The switches that do not have controller clients may sometimes be referred to herein as non-client switches.
Some of the client switches may be separated by one or more non-client switches. For example, client switch SW6 may be separated from client switch SW8 by non-client switch network 402. Non-client switch network 402 is shown in
It may be difficult for controller server 18 to control client switches in the network so that broadcast network packets are appropriately forwarded through network paths that include non-client switches. In particular, non-client switches such as switch SW7 may process broadcast network packets unpredictably (e.g., because the non-client switches are not controlled by controller server 18).
Consider the scenario in which a broadcast network packet is forwarded from client switch SW6 to client switch SW8 through non-client switch SW7. In this scenario, non-client switch SW7 may undesirably modify the broadcast network packet (e.g., by modifying header fields of the broadcast network packet) or may block the broadcast network packet. For example, non-client switch SW7 may be a network router configured to block broadcast network packets between switches SW6 and SW8 (e.g., non-client switch SW7 may prevent broadcast network packets that are sent from client switch SW6 from reaching client switch SW8 and vice versa).
Controller server 18 may control the client switches to forward broadcast network packets through network control paths (e.g., network paths through controller server 18). By directing client switches to forward broadcast network packets through control paths, controller server 18 may bypass non-client switches such as non-client switch SW7.
In step 412, end host H8 may send a broadcast network packet such as broadcast network packet 214 of
In step 414, client switch SW6 may determine from the flow table entries that the broadcast network packet should be forwarded to controller server 18. Client switch SW6 may then forward the broadcast network packet to controller server 18. For example, client switch SW6 may match the broadcast destination address with the destination address field of flow table entry 216 and perform the corresponding action specified in the action field of flow table entry 216 (e.g., forward the broadcast network packet to controller server 18). If desired, client switch SW6 may forward the broadcast network packet along with information such as which port the broadcast network packet was received at (e.g., port P21).
In step 416, controller server 18 may receive the broadcast network packet from client switch SW6 and identify an associated broadcast domain. Controller server 18 may identify the associated broadcast domain based on information retrieved from the broadcast network packet (e.g., based on source information such as source Ethernet address information) or based on information such as port information received from client switch SW6. In the example of
In steps 418-1 and 418-2, controller server 18 may control client switches that are coupled to the end hosts of the broadcast domain to forward the broadcast network packet to the end hosts of the broadcast domain, excluding the end host from which the broadcast network packet originated (e.g., excluding end host H8). Controller server 18 may identify which client switches are coupled to the end hosts of the broadcast domain based on network topology information that indicates which end hosts are coupled to each of the client switches. Controller server 18 may control the client switches by forwarding the broadcast network packet to the client switches and directing the client switches to forward the broadcast network packet from ports that are coupled to the end hosts of the broadcast domain.
In the example of
The example of
In steps 420-1 and 420-2, client switches SW8 and SW9 may forward the broadcast network packet to end hosts H10 and H9, respectively. Client switches SW8 and SW9 may, for example, forward the broadcast network packet based on packet out messages received from controller server 18.
In step 502, the controller may partition the network into broadcast domains (e.g., subsets of end hosts in the network). The controller may partition the network based on information received from a user such as a system administrator. For example, the controller may partition the network based on information from a system administrator that identifies broadcast domains and corresponding end hosts. In this scenario, the information may include end host information such as network address information (e.g., hardware address information or protocol address information). The controller may store the information in a database or other desired forms of storage on the controller.
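Step 502's ingestion of administrator-supplied broadcast-domain definitions can be sketched as building a lookup table keyed on end-host address. The input format and function name here are purely illustrative:

```python
# Sketch of turning administrator-provided broadcast-domain definitions
# into the controller's lookup table (step 502).

def build_domain_table(admin_config):
    """Map each end-host address to the name of its broadcast domain,
    rejecting any host listed in more than one domain."""
    table = {}
    for domain, hosts in admin_config.items():
        for mac in hosts:
            if mac in table:
                raise ValueError(f"{mac} appears in two broadcast domains")
            table[mac] = domain
    return table

config = {"domain_a": ["00:00:00:00:00:01", "00:00:00:00:00:02"],
          "domain_b": ["00:00:00:00:00:03"]}
```

The resulting table supports the per-packet domain lookup of step 506 with a single dictionary access.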
In step 504, the controller may receive a broadcast network packet. For example, the controller may receive a broadcast network packet from an end host via a client switch.
In step 506, the controller may identify which broadcast domain is associated with the received broadcast network packet. As an example, the controller may identify which end host sent the broadcast network packet by retrieving source address information from the broadcast network packet. In this scenario, the controller may determine which broadcast domain is associated with the identified end host based on information retrieved from a database (e.g., a database including broadcast domain information received from a user such as a system administrator). Step 416 of
In step 508, the controller may provide the broadcast network packet to switches that are coupled to end hosts of the identified broadcast domain and direct the switches to forward the broadcast network packet to the end hosts of the identified broadcast domain. Steps 418-1 and 418-2 of
The example of
The foregoing is merely illustrative of the principles of this invention and various modifications can be made by those skilled in the art without departing from the scope and spirit of the invention.
Number | Name | Date | Kind |
---|---|---|---|
4740954 | Cotton | Apr 1988 | A |
6147995 | Dobbins et al. | Nov 2000 | A |
6308218 | Vasa | Oct 2001 | B1 |
6839348 | Tang et al. | Jan 2005 | B2 |
7116681 | Hovell | Oct 2006 | B1 |
7120834 | Bishara | Oct 2006 | B1 |
7181674 | Cypher et al. | Feb 2007 | B2 |
7188191 | Hovell | Mar 2007 | B1 |
7512146 | Sivasankaran et al. | Mar 2009 | B1 |
7733859 | Takahashi et al. | Jun 2010 | B2 |
7792972 | Kamata et al. | Sep 2010 | B2 |
8160102 | Cha | Apr 2012 | B2 |
20040252680 | Porter | Dec 2004 | A1 |
20050216442 | Liskov et al. | Sep 2005 | A1 |
20080071927 | Lee | Mar 2008 | A1 |
20080130648 | Ra et al. | Jun 2008 | A1 |
20080189769 | Casado et al. | Aug 2008 | A1 |
20080239956 | Okholm et al. | Oct 2008 | A1 |
20080247395 | Hazard | Oct 2008 | A1 |
20080279196 | Friskney et al. | Nov 2008 | A1 |
20090003243 | Vaswani et al. | Jan 2009 | A1 |
20090067348 | Vasseur et al. | Mar 2009 | A1 |
20090086731 | Lee | Apr 2009 | A1 |
20090132701 | Snively | May 2009 | A1 |
20090234932 | Hamada | Sep 2009 | A1 |
20090265501 | Uehara et al. | Oct 2009 | A1 |
20090287837 | Felsher | Nov 2009 | A1 |
20090310582 | Beser et al. | Dec 2009 | A1 |
20100257263 | Casado et al. | Oct 2010 | A1 |
20100290465 | Ankaiah et al. | Nov 2010 | A1 |
20110069621 | Gintis et al. | Mar 2011 | A1 |
20110090911 | Hao et al. | Apr 2011 | A1 |
20110255540 | Mizrahi et al. | Oct 2011 | A1 |
20110296002 | Caram | Dec 2011 | A1 |
20110299528 | Yu | Dec 2011 | A1 |
20110299537 | Saraiya et al. | Dec 2011 | A1 |
20120039338 | Morimoto | Feb 2012 | A1 |
20120140637 | Dudkowski | Jun 2012 | A1 |
20120155467 | Appenzeller | Jun 2012 | A1 |
20120201169 | Subramanian et al. | Aug 2012 | A1 |
20120218997 | Shah | Aug 2012 | A1 |
20120287936 | Biswas et al. | Nov 2012 | A1 |
20120324068 | Jayamohan et al. | Dec 2012 | A1 |
20130034104 | Yedavalli | Feb 2013 | A1 |
20130058354 | Casado | Mar 2013 | A1 |
20130058358 | Fulton et al. | Mar 2013 | A1 |
20130070762 | Adams et al. | Mar 2013 | A1 |
20130132536 | Zhang et al. | May 2013 | A1 |
20130182722 | Vishveswaraiah et al. | Jul 2013 | A1 |
20130191537 | Ivanov et al. | Jul 2013 | A1 |
20130215769 | Beheshti-Zavareh | Aug 2013 | A1 |
Entry |
---|
Pfaff et al., OpenFlow Switch Specification, Dec. 31, 2009, 42 pages. |
McKeown et al., OpenFlow: Enabling Innovation in Campus Networks, Mar. 14, 2008, 6 pages. |
Cisco Systems, Cisco Catalyst 6500 Architecture, 1992-2007, 28 pages. |
Adams et al., U.S. Appl. No. 13/220,431, filed Aug. 29, 2011. |
Casado et al., “SANE: A Protection Architecture for Enterprise Networks,” Usenix Security, Aug. 2006 (15 pages). |
Casado et al., “Ethane: Taking Control of the Enterprise,” Conference of Special Interest Group on Data Communication (SIGCOMM), Japan, Aug. 2007 (12 pages). |
Koponen et al., “Onix: A Distributed Control Platform for Large-scale Production Networks,” Usenix Security, Oct. 2010 (14 pages). |
Sherwood et al. “FlowVisor: A Network Virtualization Layer,” Open Flow Technical Reports, Oct. 14, 2009 (Abstract and 14 pages) [Retrieved on Jan. 4, 2011]. Retrieved from the Internet:<URL: http://openflowswitch.org/downloads/technicalreports/openflow-tr-2009-1-flowvisor.pdf. |
Cisco Systems, “Scalable Cloud Network with Cisco Nexus 1000V Series Switches and VXLAN,” 2011 [Retrieved on Feb. 6, 2012]. Retrieved from the Internet: <URL:http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white—paper—c11-685115.pdf>. |
Sherwood et al., U.S. Appl. No. 13/367,256, filed Feb. 6, 2012. |