The data plane of a forwarding element in a network defines the way that packets will be forwarded by the forwarding element through the network. In some networks, the data plane is defined at the forwarding elements based on control plane data received from network controllers. The network controllers define a control plane for the forwarding elements based on a desired network state and distribute the control plane to the forwarding elements in order for the forwarding elements to implement the network state in their respective data planes. The forwarding elements forward data messages (e.g., Ethernet frames, Internet Protocol (IP) packets, Transmission Control Protocol (TCP) segments, User Datagram Protocol (UDP) datagrams, etc.) through the network based on their respective data planes, as defined according to the current network state.
Network controllers (like any other computing devices) may occasionally fail. When a network controller fails, a new network controller takes over the provision of control plane data to the forwarding element(s). Ideally, this failover should result in a minimum of churn (e.g., data plane recalculation) for the forwarding elements.
Some embodiments provide a network control system with techniques for handling failover of network controllers with minimal churn in the network state distributed to the forwarding elements of the network. The network control system of some embodiments includes (i) a cluster of centralized network controllers for managing the network state to be implemented on physical forwarding elements (e.g., hardware or software forwarding elements) of the network and (ii) local controllers that distribute the network state to the physical forwarding elements in a format understandable by the physical forwarding elements. In some embodiments, the centralized controllers distribute abstract network state data to the local controllers, which compute the understandable network state data and pass this understandable network state data to the physical forwarding elements. The local controllers, in some embodiments, each operate on the same physical machine as one of the physical forwarding elements.
In some embodiments, the network state maintained by the centralized network controllers defines logical networks for implementation in a distributed manner by the physical forwarding elements. Each logical network is defined by an administrator as a set of logical forwarding elements (e.g., logical switch, logical router) that logically connect a set of end machines. Each logical network or logical forwarding element is then defined as a set of data tuples (or data records) by a particular centralized controller that manages the particular logical network (or logical forwarding element).
The centralized controller distributes these abstract data tuples to the local controllers that manage the forwarding elements that will implement the logical network. In some embodiments, the end machines (e.g., virtual machines) of the logical network are distributed through the physical network on various host machines, and each forwarding element to which one of these end machines connects (e.g., a software virtual switch that operates on the same physical machine as the end machine) implements the logical network. Thus, each of the local controllers for these forwarding elements receives the abstract data tuples and computes output network state data to provide to its respective forwarding element.
In some embodiments, each local controller that manages a physical forwarding element (referred to herein as a managed forwarding element) receives input network state data entries (the abstract data tuples) and computes output network state data entries (the data tuples translated into a format understandable by the managed forwarding element). This output network state data serves as the control plane data for the managed forwarding element, defining the operation of its data plane. These output network state data entries define forwarding behaviors of the managed forwarding elements, and may also instruct the managed forwarding elements to create and tear down tunnels and to configure network constructs (e.g., ports, port queues, etc.).
In some instances, the local controller loses a connection with the centralized network controller that provides the input network state data entries for a particular logical network. The local controller can lose the connection with the centralized network controller when the centralized network controller fails or restarts, when network connectivity with the centralized network controller is lost, etc. While in some cases, the primary centralized network controller is able to quickly recover and re-establish a connection with the local controller, in general after a primary centralized network controller disconnects from the local controller (e.g., due to failure of the centralized network controller, network issues, etc.), a secondary (or backup) centralized network controller takes over as the new primary controller for the particular logical network. This new primary controller provides a new version of the input network state data entries for the input state to a local controller for generating new output network state data entries.
In many cases, the new version of the input state data entries is similar, if not identical, to the previous version of the input state data entries. As such, new output network state data entries generated from the new version of the input state data entries would also be similar or identical to the existing output network state data entries. However, when a new primary centralized network controller takes over responsibility for a particular logical network, the new primary centralized network controller may initially provide the local controllers with an empty set of input network state data entries for the logical network. In such cases, tearing down the existing network state (i.e., the output network state data entries) and rebuilding it from the newly received input state data entries introduces unnecessary churn into the system, forcing (i) the local controller to recalculate largely the same output network state data entries that it already has and (ii) the managed forwarding element to reinstall the same control plane and recompute its data plane behavior. This churn may affect the availability of the network and may create delays in propagating updates of the network state to the physical network elements.
Thus, some embodiments of the invention provide different methods for reducing this churn while maintaining a consistent network state for a set of managed forwarding elements. Specifically, in some embodiments, the local controller designates a waiting period before computing output network state data entries based on the new version of the input network state data entries. Alternatively, or conjunctively, the local controller of some embodiments calculates the changes between the new version of input state data entries and its stored existing version of the input state data entries, and only generates new output network state data entries based on the calculated changes, in order to minimize unnecessary recalculations of the output network state data entries. The new output network state data entries may then be used by the local controller to provision its managed forwarding element.
Upon receiving an initial indication from the new primary centralized network controller that a full network state has been sent to a local controller, the local controller of some embodiments begins a timed waiting period (e.g., 30 seconds, 1 minute, 5 minutes, etc.) to receive additional updates from the new primary centralized network controller. Only after completion of the timed waiting period does the local controller compute the new output state to provide control plane data to its managed forwarding element.
In various embodiments, this waiting period may be a predetermined length of time, or may be determined based on a size of the network, a comparison between the new input network state data entries and the existing input network state data entries, etc. In addition, the local controller of some embodiments processes different portions of the new input network state data differently with regard to the timed waiting period. For example, some embodiments use a shortened waiting period (or no waiting period at all) for additions to the output network state data, but provide a longer waiting period before deleting portions of the output network state data.
The local controller may receive additional updates to the new input network state data entries during the waiting period, allowing the controller to incorporate these updates before modifying the output network state data entries based on the new input network state data entries. Once the waiting period elapses, the local controller generates new output network state data entries based on the new input network state data entries, including any updates received during the waiting period. These output network state data entries are then provided to the managed forwarding element that the local controller manages, enabling the managed forwarding element to modify its state.
In addition to, or instead of using the waiting period, the local controller of some embodiments calculates differences between the new version of the input state and an existing version of the input state prior to generating a new output state, in order to avoid unnecessary recalculations of the state. Upon detecting that the connection with the initial primary centralized network controller has failed and that control has switched over to a secondary centralized network controller, the local controller marks all of the existing input network state data entries for deletion.
In some embodiments, the local controller marks the existing input network state data entries for deletion using shadow tables. In order to mark the input network state data entries for deletion, the local controller of some embodiments stores a set of entries that indicate the input network state data entries to be deleted in a set of shadow tables before applying the changes (i.e., deleting the network state data entries) to the active input and output states.
Once the existing input state has been marked for deletion, the local controller of some embodiments compares the new input network state data entries with the existing input network state data entries to identify (i) network state data entries of the new input network state data entries that match with existing input network state data entries, (ii) stale network state data entries of the existing input network state data entries that have no corresponding entry in the new input network state data entries, and (iii) new data entries of the new input network state data entries that have no corresponding portion in the existing input network state data entries.
The local controller of some embodiments then unmarks from deletion the existing input network state data entries that match with new input network state data entries (while also removing the corresponding entries from the new input network state data entries), so that the corresponding output network state data entries will not be deleted. The local controller of some embodiments then adds the new input network state data entries to the existing input state data and calculates new output state data based on the new input network state data entries. Finally, the local controller of some embodiments removes the stale input network state data entries and the corresponding stale output network state data entries. In this manner, generating the new output network state data entries does not require the recalculation of the output network state data entries that overlap between the new and existing network state data entries. The new output network state data entries may then be used by the local controller to provision its managed forwarding element.
The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description, the Drawings, and the Claims is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description, and the Drawings.
The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.
In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
Some embodiments provide a network control system with techniques for handling failover of network controllers with minimal churn in the network state distributed to the forwarding elements of the network. The network control system of some embodiments includes (i) a cluster of centralized network controllers for managing the network state to be implemented on physical forwarding elements (e.g., hardware or software forwarding elements) of the network and (ii) local controllers that distribute the network state to the physical forwarding elements in a format understandable by the physical forwarding elements. In some embodiments, the centralized controllers distribute abstract network state data to the local controllers, which compute the understandable network state data and pass this understandable network state data to the physical forwarding elements. The local controllers, in some embodiments, each operate on the same physical machine as one of the physical forwarding elements.
In some embodiments, the network state maintained by the centralized network controllers defines logical networks for implementation in a distributed manner by the physical forwarding elements. Each logical network is defined by an administrator as a set of logical forwarding elements (e.g., logical switch, logical router) that logically connect a set of end machines. Each logical network or logical forwarding element is then defined as a set of data tuples (or data records) by a particular centralized controller that manages the particular logical network (or logical forwarding element).
The centralized controller distributes these abstract data tuples to the local controllers that manage the forwarding elements that will implement the logical network. In some embodiments, the end machines (e.g., virtual machines) of the logical network are distributed through the physical network on various host machines, and each forwarding element to which one of these end machines connects (e.g., a software virtual switch that operates on the same physical machine as the end machine) implements the logical network. Thus, each of the local controllers for these forwarding elements receives the abstract data tuples and computes output network state data to provide to its respective forwarding element.
In some embodiments, each local controller that manages a physical forwarding element (referred to herein as a managed forwarding element) receives input network state data entries (the abstract data tuples) and computes output network state data entries (the data tuples translated into a format understandable by the managed forwarding element). This output network state data serves as the control plane data for the managed forwarding element, defining the operation of its data plane. These output network state data entries define forwarding behaviors of the managed forwarding elements, and may also instruct the managed forwarding elements to create and tear down tunnels and to configure network constructs (e.g., ports, port queues, etc.).
In some instances, the local controller loses a connection with the centralized network controller that provides the input network state data entries for a particular logical network. The local controller can lose the connection with the centralized network controller when the centralized network controller fails or restarts, when network connectivity with the centralized network controller is lost, etc. While in some cases, the primary centralized network controller is able to quickly recover and re-establish a connection with the local controller, in general after a primary centralized network controller disconnects from the local controller (e.g., due to failure of the centralized network controller, network issues, etc.), a secondary (or backup) centralized network controller takes over as the new primary controller for the particular logical network. This new primary controller provides a new version of the input network state data entries for the input state to a local controller for generating new output network state data entries.
In many cases, the new version of the input state data entries is similar, if not identical, to the previous version of the input state data entries. As such, new output network state data entries generated from the new version of the input state data entries would also be similar or identical to the existing output network state data entries. However, when a new primary centralized network controller takes over responsibility for a particular logical network, the new primary centralized network controller may initially provide the local controllers with an empty set of input network state data entries for the logical network. In such cases, tearing down the existing network state (i.e., the output network state data entries) and rebuilding it from the newly received input state data entries introduces unnecessary churn into the system, forcing (i) the local controller to recalculate largely the same output network state data entries that it already has and (ii) the managed forwarding element to reinstall the same control plane and recompute its data plane behavior. This churn may affect the availability of the network and may create delays in propagating updates of the network state to the physical network elements.
Thus, some embodiments of the invention provide different methods for reducing this churn while maintaining a consistent network state for a set of managed forwarding elements. Specifically, in some embodiments, the local controller designates a waiting period before computing output network state data entries based on the new version of the input network state data entries. Alternatively, or conjunctively, the local controller of some embodiments calculates the changes between the new version of input state data entries and its stored existing version of the input state data entries, and only generates new output network state data entries based on the calculated changes, in order to minimize unnecessary recalculations of the output network state data entries. The new output network state data entries may then be used by the local controller to provision its managed forwarding element.
Upon receiving an initial indication from the new primary centralized network controller that a full network state has been sent to a local controller, the local controller of some embodiments begins a timed waiting period (e.g., 30 seconds, 1 minute, 5 minutes, etc.) to receive additional updates from the new primary centralized network controller. Only after completion of the timed waiting period does the local controller compute the new output state to provide control plane data to its managed forwarding element.
In various embodiments, this waiting period may be a predetermined length of time, or may be determined based on a size of the network, a comparison between the new input network state data entries and the existing input network state data entries, etc. In addition, the local controller of some embodiments processes different portions of the new input network state data differently with regard to the timed waiting period. For example, some embodiments use a shortened waiting period (or no waiting period at all) for additions to the output network state data, but provide a longer waiting period before deleting portions of the output network state data.
The local controller may receive additional updates to the new input network state data entries during the waiting period, allowing the controller to incorporate these updates before modifying the output network state data entries based on the new input network state data entries. Once the waiting period elapses, the local controller generates new output network state data entries based on the new input network state data entries, including any updates received during the waiting period. These output network state data entries are then provided to the managed forwarding element that the local controller manages, enabling the managed forwarding element to modify its state.
In addition to, or instead of using the waiting period, the local controller of some embodiments calculates differences between the new version of the input state and an existing version of the input state prior to generating a new output state, in order to avoid unnecessary recalculations of the state. Upon detecting that the connection with the initial primary centralized network controller has failed and that control has switched over to a secondary centralized network controller, the local controller marks all of the existing input network state data entries for deletion.
In some embodiments, the local controller marks the existing input network state data entries for deletion using shadow tables. In order to mark the input network state data entries for deletion, the local controller of some embodiments stores a set of entries that indicate the input network state data entries to be deleted in a set of shadow tables before applying the changes (i.e., deleting the network state data entries) to the active input and output states.
Once the existing input state has been marked for deletion, the local controller of some embodiments compares the new input network state data entries with the existing input network state data entries to identify (i) network state data entries of the new input network state data entries that match with existing input network state data entries, (ii) stale network state data entries of the existing input network state data entries that have no corresponding entry in the new input network state data entries, and (iii) new data entries of the new input network state data entries that have no corresponding portion in the existing input network state data entries.
The local controller of some embodiments then unmarks from deletion the existing input network state data entries that match with new input network state data entries (while also removing the corresponding entries from the new input network state data entries), so that the corresponding output network state data entries will not be deleted. The local controller of some embodiments then adds the new input network state data entries to the existing input state data and calculates new output state data based on the new input network state data entries. Finally, the local controller of some embodiments removes the stale input network state data entries and the corresponding stale output network state data entries. In this manner, generating the new output network state data entries does not require the recalculation of the output network state data entries that overlap between the new and existing network state data entries. The new output network state data entries may then be used by the local controller to provision its managed forwarding element.
As described above, the network state maintained by the centralized network controllers of some embodiments defines logical networks for implementation in a distributed manner by the physical forwarding elements.
The physical network 102 includes a centralized network controller 115 and hosts 120 and 125. Host 120 includes a local controller 130, a managed forwarding element 140, and VMs 1-3. Host 125 includes a local controller 135, a managed forwarding element 145, and VM 4. The centralized network controller 115 sends data 180 and 185 to the local controllers 130 and 135, respectively.
The data 180 and 185 of some embodiments includes input network state data entries (e.g., data tuples, etc.) for the local controllers 130 and 135. In this example, data 180 includes input network state data entries A, B, and C, while data 185 includes input network state data entries A and D. As shown in this example, the local controllers 130 and 135 may receive different portions of the input network state data depending on the portions required by each associated local controller.
The local controllers 130 and 135 of some embodiments process the input network state data entries 150 and 155 received from the centralized network controllers to generate output network state data entries. In some embodiments, the output network state data 170 and 175 is control plane data for managing the control plane of the managed forwarding elements 140 and 145 by modifying the way data messages are transmitted between VMs 1-4.
In some embodiments, the local controllers 130 and 135 generate the output network state data entries 170 and 175 to be understandable to different types of managed forwarding elements. The managed forwarding elements 140 and 145 of some embodiments include several different types of managed forwarding elements (e.g., hardware forwarding elements, Open vSwitch (OVS), VMWare™ ESX Server, etc.) that are managed in different ways (e.g., flow entries, configuration instructions, etc.).
Certain types of managed forwarding elements use flow entries that are stored in forwarding tables of the managed forwarding elements. The flow entries define rules, or forwarding behaviors, for the managed forwarding element. The forwarding behaviors determine the way that packets, or data messages, are forwarded through the managed forwarding element. Each flow entry includes a set of conditions to be matched by a packet header and a set of actions (e.g., drop, forward, modify, etc.) to perform on a packet that matches the set of conditions.
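For purposes of illustration only, the following Python sketch shows one possible way to model such flow entries and their lookup; the field names, action strings, and priority handling here are illustrative assumptions rather than the format used by any particular managed forwarding element.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

@dataclass
class FlowEntry:
    match: Dict[str, str]      # header field values a packet must carry to match
    actions: List[str]         # actions applied on a match, e.g. ["forward:port1"], ["drop"]
    priority: int = 0

def lookup(flow_table: List[FlowEntry], header: Dict[str, str]) -> Optional[FlowEntry]:
    """Return the highest-priority entry whose conditions are all satisfied by
    the packet header, or None when nothing matches."""
    matching = [
        entry for entry in flow_table
        if all(header.get(field) == value for field, value in entry.match.items())
    ]
    return max(matching, key=lambda e: e.priority, default=None)

# A two-entry table: forward a known destination MAC, drop everything else.
table = [
    FlowEntry(match={"dst_mac": "00:00:00:00:00:01"}, actions=["forward:port1"], priority=10),
    FlowEntry(match={}, actions=["drop"], priority=0),
]
print(lookup(table, {"dst_mac": "00:00:00:00:00:01"}).actions)   # ['forward:port1']
```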
I. Waiting Period
Some embodiments provide a method that reduces churn in a system after receiving new input state by using a waiting period.
In the first stage 201, local controller 130 receives input network state data entries 150 from a primary centralized network controller 115. The local controller 130 processes the input network state data entries 150 using an engine 160 to generate output network state data entries 170.
In this example, the input network state data entries 150 include entries A, B, and C, while the output network state data entries 170 include entries A′, B′, and C′ to represent that A′, B′, and C′ are the output network state data entries that result from the processing of input network state data entries A, B, and C respectively. Although this example is shown with a one-to-one relationship between the input and output network state data entries, in some embodiments a single input state data entry may result in multiple output network state data entries or vice versa. In some embodiments, the input network state data entries 150 represent an abstract definition (e.g., data tuples) of the network state that is not specific to any of the physical elements of the physical network. The output network state data entries 170 represent control plane data (e.g., flow entries, configuration instructions, etc.) that is provided to the managed forwarding elements (not shown) of the physical network. The managed forwarding elements process the control plane data to modify the data plane of the managed forwarding elements and to implement the network state defined by the controllers.
The second stage 202 shows that the local controller 130 has lost the connection to the primary centralized network controller 115. In addition, the second stage 202 shows that, upon detecting the disconnect, the secondary (or backup) centralized controller 218 takes over as the new primary centralized controller and sends a new set of input network state data 280 to local controller 130. In some embodiments, local controller 130 detects the disconnect and sends a request to the new primary centralized network controller 218 to send the new input network state data.
In the third stage 203, local controller 130 has received the new input network state data 280 as a single transaction 250. The new primary centralized network controller 218 sends the new input network state data 280 to the local controller 130 with (i) a begin message, signaling the beginning of a synchronization transaction, (ii) a complete version of the state (an empty set in this example), and (iii) an end message, signaling the end of the synchronization transaction.
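For illustration, the sketch below shows one way a local controller might buffer such a synchronization transaction before applying it; the message type names ("sync_begin", "entry", "sync_end") are assumptions made for this example rather than an actual protocol.

```python
def read_sync_transaction(messages):
    """Buffer a complete snapshot of input state entries between a sync_begin
    and a sync_end message; return None if no complete transaction arrives."""
    snapshot = None
    for msg in messages:
        kind = msg["type"]
        if kind == "sync_begin":
            snapshot = {}                           # start staging a fresh snapshot
        elif kind == "entry" and snapshot is not None:
            snapshot[msg["key"]] = msg["value"]
        elif kind == "sync_end" and snapshot is not None:
            return snapshot                         # the snapshot becomes usable only here
    return None                                     # transaction never completed

# The empty-state case from this example: a begin immediately followed by an end.
print(read_sync_transaction([{"type": "sync_begin"}, {"type": "sync_end"}]))   # {}
```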
In this example, the received network state data 250 does not contain any input network state data entries. This can result when the secondary controller 218 does not constantly maintain the necessary state for the local controller 130, but rather needs to collect the state from other centralized network controllers (not shown). Rather than tearing down the existing output network state data entries 170 and rebuilding an empty output state, the local controller 130 sets a waiting period 290 to wait for additional updates to the input network state data entries 250 before applying the new input network state data entries 250 to the active network state. If an incorrect or incomplete version of the network state were processed and propagated to the managed forwarding elements, this could result in outages or errors for the data plane of the network.
The fourth stage 204 shows that the new primary centralized network controller 218 sends an update 285 (with new input network state data entries A, B, and D) to local controller 130. The waiting period 290 has not yet expired, so the local controller 130 has maintained the existing input and output network state data entries 150 and 170.
In the fifth stage 205, the waiting period 290 has expired and local controller 130 has loaded the new input network state data entries 250 and the received updates 285 as the input network state data entries 150. The local controller 130 has also generated new output network state data entries 170 (A′, B′, and D′) based on the updated input network state data entries 150. Finally, the sixth stage 206 shows that local controller 130 propagates the generated output network state data entries 170 to managed forwarding element 140 to modify the forwarding behaviors of managed forwarding element 140.
The process 300 then receives (at 310) new input state. In some embodiments, the local controller establishes a new connection to a new centralized network controller. The local controller in some embodiments maintains a secondary connection to a secondary centralized network controller, which takes over the responsibilities of the primary centralized network controller to become the new primary centralized network controller.
After receiving (at 310) the new input state, the process 300 determines (at 315) whether the new input state is sufficient. The new input state may be insufficient when a new primary centralized network controller does not have an up-to-date version of the state. For example, in some cases, a new primary centralized network controller does not maintain the entire network state and has to wait for other centralized network controllers in the system to provide data regarding the current state of the network before it is able to provide current network state data to the local controllers. In some such embodiments, process 300 determines (at 315) that a new input state is sufficient as long as the new input state is not an empty state.
Alternatively or conjunctively, the process 300 of some embodiments determines (at 315) whether new input state is sufficient based on a comparison between the existing input state and the new input state. For example, in some embodiments, the process 300 determines (at 315) that the new input state is sufficient as long as the size of the new input state is within a certain percentage (e.g., +/−10%) of the existing input state.
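As a purely illustrative sketch of this sufficiency test, the comparison below treats the new input state as sufficient when it is non-empty and its size falls within the example tolerance; the key-value representation of the state is a simplifying assumption.

```python
def is_sufficient(new_state: dict, existing_state: dict, tolerance: float = 0.10) -> bool:
    """Treat the new input state as sufficient only if it is non-empty and its
    size is within +/- tolerance of the existing input state's size."""
    if not new_state:
        return False
    if not existing_state:
        return True                                   # nothing to compare against
    ratio = len(new_state) / len(existing_state)
    return (1.0 - tolerance) <= ratio <= (1.0 + tolerance)

print(is_sufficient({}, {"A": 1, "B": 2, "C": 3}))                         # False: empty snapshot
print(is_sufficient({"A": 1, "B": 2, "D": 4}, {"A": 1, "B": 2, "C": 3}))   # True: same size
```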
When the process 300 determines (at 315) that the new input state is sufficient, the process 300 transitions to 340, which will be described further below. Otherwise, the process 300 transitions to 320. At 320, the process 300 of some embodiments determines a waiting period for implementing the changes of the new input state.
The waiting period allows a local controller to receive additional updates to the input state and to avoid making unnecessary changes to the output state due to incomplete state data. The process 300 of some embodiments determines (at 320) the waiting period based on the size of the network for which the centralized network controllers manage state data. For example, in some embodiments, the waiting period is calculated based on an estimated amount of time required for the centralized network controllers to calculate and synchronize the network state data throughout the network. In some of these embodiments, the process 300 determines (at 320) the amount of time necessary for a full synchronization based on a number of network elements (e.g., forwarding elements, ports, access control lists (ACLs), etc.) in the network. In some embodiments, rather than calculating the waiting period directly, the process 300 receives a value for the waiting period from a centralized network controller (e.g., 115 or 218) of the centralized network controller cluster.
Alternatively, or conjunctively, the process 300 determines (at 320) the length of the waiting period based on an analysis of the new input network state data entries received from the centralized network controller. For example, in some embodiments, the length of the waiting period depends on a comparison of a size of the received new input network state data with a size of the existing input network state data, or is based on a size of the logical network. In other cases, the process 300 only uses a waiting period when the new input state is empty, indicating that the new controller has not yet been updated with a desired network state.
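The following sketch illustrates, under an assumed per-element timing cost, how such a waiting period might be chosen; the numeric constants merely echo the example durations given above and are not prescribed values.

```python
from typing import Optional

def waiting_period_seconds(num_network_elements: int,
                           controller_supplied: Optional[float] = None) -> float:
    """Choose a waiting period: defer to a value received from the centralized
    controller cluster when one is available, otherwise estimate the time a full
    resynchronization might take from the number of network elements."""
    if controller_supplied is not None:
        return controller_supplied
    estimate = 0.01 * num_network_elements        # assumed ~10 ms of sync work per element
    return min(max(estimate, 30.0), 300.0)        # clamp to the 30 s .. 5 min example range

print(waiting_period_seconds(10_000))                           # 100.0
print(waiting_period_seconds(500, controller_supplied=60.0))    # 60.0
```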
The process 300 then receives (at 325) updates to the input state from the new primary centralized network controller. In some embodiments, unlike the new input state received at 310, the updates received from the new primary centralized network controller do not represent the entire state for the local controller, but only modifications made to the state since a previous update (or synchronization) from the centralized network controller.
The process 300 then determines (at 330) whether the waiting period has expired. When the waiting period has not yet expired, the process 300 transitions back to 325. Once the waiting period has expired, the process 300 incorporates (at 335) the updates received during the waiting period into the new input state received at 310.
The process 300 then generates (at 340) new output state based on the new input state and any updates received during the waiting period. The process 300 of some embodiments then uses the new output state to modify forwarding behaviors of managed forwarding elements to implement the new network state.
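To tie these operations together, a simplified sketch of the waiting-period loop is shown below; receive_update, compute_output, and push_to_mfe are hypothetical placeholders standing in for the local controller's connection to the centralized controller, its computation engine, and its channel to the managed forwarding element.

```python
import time

def apply_after_waiting(staged_input: dict, waiting_period: float,
                        receive_update, compute_output, push_to_mfe) -> dict:
    """Collect interim updates until the waiting period expires, then compute
    and push the new output state once."""
    deadline = time.monotonic() + waiting_period
    while time.monotonic() < deadline:
        remaining = max(0.0, deadline - time.monotonic())
        update = receive_update(timeout=remaining)   # hypothetical blocking receive
        if update:
            staged_input.update(update)              # fold interim updates into the snapshot
    new_output = compute_output(staged_input)        # recompute control plane data once
    push_to_mfe(new_output)                          # provision the managed forwarding element
    return new_output
```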
II. Computing Output State Based on Differences in Input State
In some embodiments, in addition to or instead of calculating waiting periods, the local controller generates new output state based on differences between the new version of the input state and an existing version of the input state, in order to avoid unnecessary recalculations and unavailability of the state.
The first stage 401 shows that local controller 130 receives input network state data entries 150 from primary centralized network controller 115 and processes the input network state data entries 150 using engine 160 to generate output network state data entries 170.
In some embodiments, the local controller 130 stores the input network state data entries 150 received from the centralized network controller 115 in a set of input tables and generates the output network state data entries 170 by processing the received input network state data entries 150 to create output network state data entries 170 in a set of output tables. The engine 160 of some embodiments processes the input network state data entries 150 by performing a series of table joins on the set of input tables to generate the set of output tables with the output network state data entries 170.
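As an illustration of this table-join processing, the sketch below joins two invented input tables into one output table; the table and column names are assumptions for this example and do not correspond to any particular schema.

```python
# Two invented input tables, as might be stored by a local controller.
logical_ports = [                      # logical port -> logical switch
    {"lport": "lp1", "lswitch": "ls1"},
    {"lport": "lp2", "lswitch": "ls1"},
]
port_locations = [                     # logical port -> host where it currently resides
    {"lport": "lp1", "host": "host120"},
    {"lport": "lp2", "host": "host125"},
]

def join_output_table(ports, locations):
    """Join the two input tables on 'lport' to produce an output table telling
    a managed forwarding element where to tunnel traffic for each logical port."""
    host_by_port = {row["lport"]: row["host"] for row in locations}
    return [
        {"lswitch": p["lswitch"], "lport": p["lport"], "tunnel_to": host_by_port[p["lport"]]}
        for p in ports if p["lport"] in host_by_port
    ]

for row in join_output_table(logical_ports, port_locations):
    print(row)
```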
The second stage 402 shows that the local controller 130 has lost the connection to primary centralized network controller 115 and established a new connection to secondary centralized network controller 218, which takes over as the new primary centralized network controller. In addition, the second stage 402 shows that, upon detecting the disconnect, the local controller 130 marks all of the existing input network state data entries 150 (i.e., A, B, and C) for deletion. In this example, the local controller 130 marks the existing input network state data entries 150 for deletion by adding entries to a shadow table 450 to delete the input network state data entries A, B, and C.
The third stage 403 shows that the new primary centralized network controller 218 sends new input network state data entries 480 (i.e., A, B, and D) to the local controller 130. In this example, network state data entries A and B have corresponding entries in the existing input network state data entries 150. Input network state data entry D is a new entry that has no matching entry in the existing input network state data entries 150 and input state data entry C of the existing input network state data entries 150 is a stale input state data entry with no matching input state data entry in the new input network state data entries 455.
In the fourth stage 404, the local controller 130 unmarks the matching input network state data entries by removing the entries for deleting entries A and B from shadow table 450. The fourth stage 404 also shows that an entry to “Add D” has been added to the shadow table 450.
In the fifth stage 405, the entries to delete A and B have been removed from the shadow table 450 and the entry to “Add D” in shadow table 450 has been processed. Input state data entry “D” has been added to the existing input tables 150 and new output entry D′ has been created in the output tables 170. In this example, shadow table entries that add new input network state data entries are processed before any entries that delete existing state data. This ensures that necessary existing state is not torn down before the new state is built up.
In the sixth stage 406, local controller 130 processes the entry in shadow table 450 to “Delete C” from existing input network state data entries 150. The change is then propagated through to the output network state data entries 170 by engine 160. In some embodiments, rather than propagating the deleted entries through engine 160, local controller 130 directly deletes network state data entries from both the input network state data entries 150 and the output network state data entries 170, without recalculating the output network state data entries. Finally, the seventh stage 407 shows that local controller 130 propagates the generated output network state data entries 170 to managed forwarding element 140 to modify the forwarding behaviors of managed forwarding element 140.
Upon detecting (at 505) that the local controller has disconnected, the process 500 then marks (at 510) all of the existing input state for deletion. In some embodiments, the process 500 marks (at 510) the existing input network state data entries for deletion using shadow tables. In order to mark the input network state data entries for deletion, the local controller of some embodiments stores a set of entries that indicate the input network state data entries to be deleted in a set of shadow tables before applying the changes (i.e., deleting the network state data entries) to the active input and output states.
Once the existing input network state data entries are marked for deletion, the process 500 receives (at 515) new input network state data entries from a new primary centralized network controller.
At 520, the process 500 determines whether the new input state data includes any duplicate input network state data entries or input network state data entries that have matching network state data entries in the existing input state data. When the process 500 determines (at 520) that the new input state data includes matching input network state data entries, the process 500 unmarks (at 525) the matching input network state data entries in the existing input state data so that they are no longer marked for deletion. In some embodiments, the process 500 unmarks (at 525) the matching input network state data entries by removing entries corresponding to the matching input network state data entries from the shadow tables described above.
When the process 500 determines (at 520) that the new input state data does not include matching input network state data entries, or the matching network state data entries have been unmarked (at 525), the process 500 determines (at 530) whether the new input state data includes any input network state data entries that do not have matching input network state data entries in the existing input network state data entries. When the process 500 determines (at 530) that the new input state data does include new input network state data entries, the process 500 adds (at 535) the new input network state data entries to the existing input network state data entries.
When the process 500 determines (at 530) that the new input state data does not include any new input network state data entries, or the new input network state data entries have been added (at 535), the process 500 deletes (at 540) the existing input network state data entries that are still marked for deletion.
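For illustration, the following sketch condenses this difference computation, treating input network state data entries as keyed records and the shadow table as pending add and delete sets; additions are applied before deletions, as described above, and only the returned additions and deletions require output recomputation.

```python
def reconcile(existing_input: dict, new_input: dict):
    """Mark everything for deletion, unmark entries that match the new input,
    stage genuinely new or changed entries as additions, then apply additions
    before deletions so overlapping state is never torn down and rebuilt."""
    shadow_deletes = set(existing_input)             # mark all existing entries for deletion
    shadow_adds = {}

    for key, value in new_input.items():
        if key in existing_input and existing_input[key] == value:
            shadow_deletes.discard(key)              # matching entry: keep it and its output
        else:
            shadow_adds[key] = value                 # new (or changed) entry needing new output
            shadow_deletes.discard(key)              # a changed entry is replaced, not deleted

    for key, value in shadow_adds.items():           # additions first ...
        existing_input[key] = value
    for key in shadow_deletes:                       # ... then remove stale entries
        del existing_input[key]

    return shadow_adds, shadow_deletes               # only these require output recomputation

state = {"A": 1, "B": 2, "C": 3}
adds, deletes = reconcile(state, {"A": 1, "B": 2, "D": 4})
print(adds, deletes, state)   # {'D': 4} {'C'} {'A': 1, 'B': 2, 'D': 4}
```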
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
III. Electronic System
The bus 605 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal machines of the computer system 600. For instance, the bus 605 communicatively connects the processing unit(s) 610 with the read-only memory 630, the system memory 625, and the permanent storage machine 635.
From these various memory units, the processing unit(s) 610 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments. The read-only-memory (ROM) 630 stores static data and instructions that are needed by the processing unit(s) 610 and other modules of the computer system. The permanent storage machine 635, on the other hand, is a read-and-write memory machine. This machine is a non-volatile memory unit that stores instructions and data even when the computer system 600 is off. Some embodiments of the invention use a mass-storage machine (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage machine 635.
Other embodiments use a removable storage machine (such as a floppy disk, flash drive, etc.) as the permanent storage machine. Like the permanent storage machine 635, the system memory 625 is a read-and-write memory machine. However, unlike storage machine 635, the system memory is a volatile read-and-write memory, such as a random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 625, the permanent storage machine 635, and/or the read-only memory 630. From these various memory units, the processing unit(s) 610 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
The bus 605 also connects to the input and output machines 640 and 645. The input machines enable the user to communicate information and select commands to the computer system. The input machines 640 include alphanumeric keyboards and pointing machines (also called “cursor control machines”). The output machines 645 display images generated by the computer system. The output machines include printers and display machines, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include machines such as a touchscreen that function as both input and output machines.
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological machines. These terms exclude people or groups of people. For the purposes of the specification, the terms display or displaying mean displaying on an electronic machine. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. For instance, this specification refers throughout to computational and network environments that include virtual machines (VMs). However, virtual machines are merely one example of data compute nodes (DCNs) or data compute end nodes, also referred to as addressable nodes. DCNs may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules.
VMs, in some embodiments, operate with their own guest operating systems on a host using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. In some embodiments, the host operating system uses name spaces to isolate the containers from each other and therefore provides operating-system level segregation of the different groups of applications that operate within different containers. This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers. Such containers are more lightweight than VMs.
A hypervisor kernel network interface module, in some embodiments, is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/transmit threads. One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc.
One of ordinary skill in the art will recognize that while the specification refers to VMs, the examples given could be any type of DCNs, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules. In fact, the example networks could include combinations of different types of DCNs in some embodiments.
20100125667 | Soundararajan | May 2010 | A1 |
20100131636 | Suri et al. | May 2010 | A1 |
20100153554 | Anschutz et al. | Jun 2010 | A1 |
20100162036 | Linden et al. | Jun 2010 | A1 |
20100165877 | Shukla et al. | Jul 2010 | A1 |
20100169467 | Shukla et al. | Jul 2010 | A1 |
20100191848 | Fujita et al. | Jul 2010 | A1 |
20100205479 | Akutsu et al. | Aug 2010 | A1 |
20100214949 | Smith et al. | Aug 2010 | A1 |
20100257263 | Casado et al. | Oct 2010 | A1 |
20100275199 | Smith et al. | Oct 2010 | A1 |
20100290485 | Martini et al. | Nov 2010 | A1 |
20100322255 | Hao et al. | Dec 2010 | A1 |
20110016215 | Wang | Jan 2011 | A1 |
20110026521 | Gamage et al. | Feb 2011 | A1 |
20110032830 | Van Der Merwe et al. | Feb 2011 | A1 |
20110075664 | Lambeth et al. | Mar 2011 | A1 |
20110075674 | Li et al. | Mar 2011 | A1 |
20110085557 | Gnanasekaran et al. | Apr 2011 | A1 |
20110085559 | Chung et al. | Apr 2011 | A1 |
20110103259 | Aybay et al. | May 2011 | A1 |
20110119748 | Edwards et al. | May 2011 | A1 |
20110134931 | Merwe et al. | Jun 2011 | A1 |
20110142053 | Van Der Merwe et al. | Jun 2011 | A1 |
20110173490 | Narayanaswamy et al. | Jul 2011 | A1 |
20110261825 | Ichino | Oct 2011 | A1 |
20110273988 | Tourrilhes et al. | Nov 2011 | A1 |
20110296052 | Guo et al. | Dec 2011 | A1 |
20110299534 | Koganti et al. | Dec 2011 | A1 |
20110299537 | Saraiya et al. | Dec 2011 | A1 |
20110305167 | Koide | Dec 2011 | A1 |
20110310899 | Alkhatib et al. | Dec 2011 | A1 |
20110317559 | Kern et al. | Dec 2011 | A1 |
20110317701 | Yamato et al. | Dec 2011 | A1 |
20120014386 | Xiong et al. | Jan 2012 | A1 |
20120147894 | Mulligan et al. | Jun 2012 | A1 |
20120151550 | Zhang | Jun 2012 | A1 |
20120158942 | Kalusivalingam et al. | Jun 2012 | A1 |
20120185553 | Nelson | Jul 2012 | A1 |
20120236734 | Sampath et al. | Sep 2012 | A1 |
20120239790 | Doane et al. | Sep 2012 | A1 |
20130024579 | Zhang et al. | Jan 2013 | A1 |
20130044636 | Koponen et al. | Feb 2013 | A1 |
20130044752 | Koponen et al. | Feb 2013 | A1 |
20130054761 | Kempf et al. | Feb 2013 | A1 |
20130058346 | Sridharan et al. | Mar 2013 | A1 |
20130060940 | Koponen et al. | Mar 2013 | A1 |
20130103817 | Koponen et al. | Apr 2013 | A1 |
20130103818 | Koponen et al. | Apr 2013 | A1 |
20130114466 | Koponen et al. | May 2013 | A1 |
20130117428 | Koponen et al. | May 2013 | A1 |
20130117429 | Koponen et al. | May 2013 | A1 |
20130125230 | Koponen et al. | May 2013 | A1 |
20130163427 | Beliveau et al. | Jun 2013 | A1 |
20130163475 | Beliveau et al. | Jun 2013 | A1 |
20130208623 | Koponen et al. | Aug 2013 | A1 |
20130211549 | Thakkar et al. | Aug 2013 | A1 |
20130212148 | Koponen et al. | Aug 2013 | A1 |
20130212235 | Fulton et al. | Aug 2013 | A1 |
20130212243 | Thakkar et al. | Aug 2013 | A1 |
20130212244 | Koponen et al. | Aug 2013 | A1 |
20130212245 | Koponen et al. | Aug 2013 | A1 |
20130212246 | Koponen et al. | Aug 2013 | A1 |
20130219037 | Thakkar et al. | Aug 2013 | A1 |
20130219078 | Padmanabhan et al. | Aug 2013 | A1 |
20130227097 | Yasuda et al. | Aug 2013 | A1 |
20130332602 | Nakil et al. | Dec 2013 | A1 |
20130332619 | Xie et al. | Dec 2013 | A1 |
20140016501 | Kamath et al. | Jan 2014 | A1 |
20140019639 | Ueno | Jan 2014 | A1 |
20140040466 | Yang | Feb 2014 | A1 |
20140189212 | Slaight et al. | Jul 2014 | A1 |
20140247753 | Koponen et al. | Sep 2014 | A1 |
20140348161 | Koponen et al. | Nov 2014 | A1 |
20140351432 | Koponen et al. | Nov 2014 | A1 |
20150009804 | Koponen | Jan 2015 | A1 |
20150341205 | Invernizzi | Nov 2015 | A1 |
20160119224 | Ramachandran et al. | Apr 2016 | A1 |
20160197774 | Koponen et al. | Jul 2016 | A1 |
Number | Date | Country |
---|---|---|
0737921 | Oct 1996 | EP |
1443423 | Aug 2004 | EP |
2838244 | Feb 2015 | EP |
2485866 | May 2012 | GB |
2009001845 | Dec 2008 | WO |
2011080870 | Jul 2011 | WO |
Entry |
---|
International Search Report and Written Opinion of commonly owned, counterpart International Patent Application PCT/US2017/013820, dated May 4, 2017, Nicira, Inc. |
Kent, William, “A Simple Guide to Five Normal Forms in Relational Database Theory,” Communications of the ACM, Feb. 1, 1983, 6 pages, vol. 26, No. 2, Association for Computing Machinery, Inc., USA. |
Ciavaglia, Laurent, et al., “An Architectural Reference Model for Autonomic Networking, Cognitive Networking and Self-Management,” Mar. 2012, 179 pages, Draft ETSI GS AFI 002 V0.0.17, European Telecommunications Standards Institute (ETSI). |
Reitblatt, Mark, et al., “Consistent Updates for Software-Defined Networks: Change You Can Believe in!” Proceedings of the 10th ACM Workshop on Hot Topics in Networks, Nov. 14-15, 2011, 6 pages, ACM, Cambridge, MA. |
Wang, Wei-Ming, et al., “Analysis and Implementation of an Open Programmable Router Based on Forwarding and Control Element Separation,” Journal of Computer Science and Technology, Sep. 2008, 11 pages, vol. 23 Issue 5, Springer International Publishing AG. |
Adya, Atul, et al., “Cooperative Task Management without Manual Stack Management,” Proceedings of the 2002 USENIX Annual Technical Conference, Jun. 2002, 14 pages, Monterey, CA, USA. |
Caesar, Matthew, et al., “Design and Implementation of a Routing Control Platform,” NSDI '05: 2nd Symposium on Networked Systems Design & Implementation, Apr. 2005, 14 pages, USENIX Association. |
Cai, Zheng, et al., “The Preliminary Design and Implementation of the Maestro Network Control Platform,” Oct. 1, 2008, 17 pages, NSF. |
Casado, Martin, et al., “Ethane: Taking Control of the Enterprise,” SIGCOMM'07, Aug. 27-31, 2007, 12 pages, ACM, Kyoto, Japan. |
Enns, R., “NETCONF Configuration Protocol,” Dec. 2006, 96 pages, IETF Trust (RFC 4741). |
Greenberg, Albert, et al., “A Clean Slate 4D Approach to Network Control and Management,” ACM SIGCOMM Computer Communication Review, Oct. 2005, 12 pages, vol. 35, No. 5. |
Greenberg, Albert, et al., “VL2: A Scalable and Flexible Data Center Network,” SIGCOMM'09, Aug. 17-21, 2009, 12 pages, ACM, Barcelona, Spain. |
Gude, Natasha, et al., “NOX: Towards an Operating System for Networks,” ACM SIGCOMM Computer Communication Review, Jul. 2008, 6 pages, vol. 38, No. 3. |
Hinrichs, Timothy L., et al., “Practical Declarative Network Management,” WREN'09, Aug. 21, 2009, 10 pages, Barcelona, Spain. |
Koponen, Teemu, et al., “Onix: A Distributed Control Platform for Large-scale Production Networks,” In Proc. OSDI, Oct. 2010, 14 pages. |
Krishnaswamy, Umesh, et al., “ONOS Open Network Operating System—An Experimental Open-Source Distributed SDN OS,” Apr. 16, 2013, 24 pages. |
Berde, Pankaj, et al., “ONOS Open Network Operating System—An Experimental Open-Source Distributed SDN OS,” Dec. 19, 2013, 4 pages. |
Schneider, Fred B., “Implementing Fault-Tolerant Services Using the State Machine Approach: A Tutorial,” ACM Computing Surveys, Dec. 1990, 21 pages, vol. 22, No. 4, ACM. |
Terry, Douglas B., et al., “Managing Update Conflicts in Bayou, a Weakly Connected Replicated Storage System,” SIGOPS '95, Dec. 1995, 12 pages, ACM, Colorado, USA. |
Related Publications:
Number | Date | Country
---|---|---|
20160294680 A1 | Oct 2016 | US
Provisional Applications:
Number | Date | Country
---|---|---|
62143706 | Apr 2015 | US