Management of update queues for network controller

Information

  • Patent Grant
  • Patent Number
    11,601,521
  • Date Filed
    Wednesday, May 5, 2021
  • Date Issued
    Tuesday, March 7, 2023
Abstract
Some embodiments provide a method for a network controller that manages multiple managed forwarding elements (MFEs) that implement multiple logical networks. The method stores (i) a first data structure including an entry for each logical entity in a desired state of the multiple logical networks and (ii) a second data structure including an entry for each logical entity referred to by an update for at least one MFE. Upon receiving updates specifying modifications to the logical entities, the method adds separate updates to separate queues for the MFEs that require the update. The separate updates reference the logical entity entries in the second data structure. When the second data structure reaches a threshold size in comparison to the first data structure, the method compacts the updates in at least one of the queues so that each queue has no more than one update referencing a particular logical entity entry.
Description
BACKGROUND

Software-defined networking (SDN) often uses network controllers to configure virtual (logical) networks throughout a datacenter. As SDN becomes more prevalent and datacenters cater to more and more tenants, controllers are expected to perform more operations. Key to this architecture is that the controllers do not become bottlenecks in the configuration process, and that these controllers be able to handle situations in which other elements downstream in the configuration process are bottlenecked (i.e., ensuring that if one switch is a bottleneck, this does not slow the configuration of other switches). As such, techniques to improve the use of processing resources by network controllers are needed.


BRIEF SUMMARY

Some embodiments provide a method for managing update queues at a network controller that maintains an update queue for each managed forwarding element (MFE) of a set of MFEs that the network controller manages. The network controller of some embodiments receives updates to distribute to one or more MFEs, identifies which of the MFEs that it manages require each update, and adds separate updates to the queues for the identified MFEs. These updates are distributed from the queues to the MFEs (or to local controllers operating alongside the MFEs to manage the MFEs directly). In order to reduce the load on the network controller, in some embodiments the separate updates added to multiple queues for the same received update all refer to a shared entry or entries (e.g., shared objects) stored by the network controller to represent a network entity created, modified, or removed by the update. In addition, some embodiments compact the updates within the separate queues at various times in order to reduce (i) the number of updates in at least some of the queues and (ii) the number of entries stored by the network controller.


The network controller of some embodiments receives updates as changes to the desired state of one or more entities of a logical network implemented by at least a subset of the MFEs managed by the controller. The physical network (e.g., a datacenter, combination of multiple datacenters, etc.) that contains the MFEs may implement multiple logical networks, each of which includes multiple logical entities. These logical entities include, in some embodiments, logical forwarding elements (e.g., logical routers, logical switches) and logical ports of such logical forwarding elements. The updates are received from a management plane application (e.g., running on a separate controller computer) based on, e.g., user input to change the configuration of a logical network. The network controller of some embodiments is responsible for distributing these updates to its set of MFEs (the network controller may be part of a cluster of network controllers that each manage different sets of MFEs). The network controller receives the update to the desired state and, based at least in part on the receipt of information received from the MFEs (referred to as runtime state), generates translated state updates for the MFEs. These translated state updates are placed in the distribution (or publication) queues for the MFEs. As noted above, in some embodiments, rather than directly communicating with the MFEs, the network controller publishes the translated state updates to the local controllers that operate alongside the MFEs (e.g., in the same host machines as the MFEs).


As mentioned, the network controller of some embodiments uses shared entries to represent the logical entities that corresponding updates in multiple queues reference. In some embodiments, the controller stores two data structures with entries (e.g., objects) for logical entities. The controller stores (i) a first data structure with an entry for each logical entity in the desired state of the logical networks and (ii) a second data structure with an entry for each logical entity referenced by an update for at least one MFE (possibly including both updates currently in the queues and updates that have been distributed from the queues to the MFEs).


The first data structure represents the current desired state of all of the logical networks, and thus has an entry for each logical switch, logical router, logical switch port, logical router port, etc. of each logical network. In addition, some embodiments create entries (e.g., additional objects) for each property of such a logical entity. Thus, a logical switch port might have a primary object as well as related objects for some or all of its properties that can be modified.


The second data structure, in some embodiments, includes a corresponding entry for each entry in the first data structure (and thus includes entries for all logical entities part of the current desired state of the logical networks) as well as entries for other logical entities that may be referenced by the updates in the queues. When a first update specifies the creation of a logical entity and a later update specifies the deletion of that logical entity, the corresponding entry or entries will be removed from the first data structure. However, because the updates in the queues refer to shared entries, the entry or entries for the logical entity are not removed from the second data structure, to which the updates in the queues refer.


These updates, in some embodiments, are structured as references (e.g., pointers) along with metadata specifying the modification(s) to be made to the logical entity corresponding to the referenced entry. For instance, an update to remove a logical switch would have a reference to the entry for the logical switch along with metadata specifying to delete the object. An update to change the name of the logical switch would have a reference to the same entry (or a related entry for the logical switch name) with metadata specifying the new value for the name property. Having numerous updates in separate queues reference the same entry reduces the memory used by the network controller, as the entries (objects) typically occupy much more memory than the references (pointers) to the objects. This memory load may be a problem if one or more of the queues backs up significantly. Furthermore, multiple separate updates in a queue that modify the same logical entity will also refer to the same shared entry, thereby further saving space.
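For illustration, the pointer-plus-metadata structure described above might be modeled as in the following minimal Python sketch. This is not the patent's implementation; the names RegistryEntry and Update and the tuple-based metadata format are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class RegistryEntry:
    """Shared object representing one logical entity (e.g., a logical
    switch). Stored once in the second data structure (the registry) and
    referenced by updates in many queues, so its relatively large memory
    cost is paid only once."""
    entity_id: str
    properties: dict = field(default_factory=dict)

@dataclass
class Update:
    """A queued update: a reference to the shared entry plus lightweight
    metadata describing the modification(s) to make to the entity."""
    entry: RegistryEntry   # shared reference, not a copy
    metadata: list         # e.g., [("create",)] or [("set", "name", "web-ls")]

# Example: deleting a logical switch enqueues a small record that points
# at the one shared entry rather than duplicating the entry per queue.
ls = RegistryEntry("LS1")
delete_ls = Update(entry=ls, metadata=[("delete",)])
```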


The updates may be distributed from different queues at different speeds. This may occur due to an MFE (or the local controller for an MFE) operating slowly, connectivity between the network controller and an MFE being cut off for a period of time, different numbers of updates being distributed to different MFEs, etc. Thus, while an update to delete a particular logical port may have been distributed to twenty MFEs, the update could still remain in the queue of a twenty-first MFE, and thus the entry referenced by these twenty-one updates needs to remain in the second data structure.


While using shared objects in the second data structure does reduce the overall memory load on the network controller, both the second data structure and any very slow update queues could grow unendingly large without a procedure in place to limit these structures. Thus, some embodiments use a queue compaction procedure that guarantees an upper bound on the overall memory occupied by the combination of the first data structure, the second data structure, and all of the individual update queues (for a bounded number of desired state logical entities). That is, the overall memory load is guaranteed to be a bounded function of the memory load of the first data structure (which is bounded according to the number of logical entities in the desired state at any given time).


The queue compaction procedure of some embodiments both compacts updates within the queues and removes unnecessary entries from the second data structure. While the compaction procedure is performed continuously in some embodiments, in order to save processing resources other embodiments perform the compaction process whenever the second data structure reaches a specified size. For example, some embodiments perform the compaction process whenever the second data structure reaches a threshold size relative to the first data structure, such as whenever the second data structure has twice as many entries as the first data structure.
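One concrete (hypothetical) rendering of that trigger follows; treating the 2x ratio as a tunable parameter is an assumption, since the text gives it only as an example.

```python
def should_compact(registry: dict, desired_image: dict, ratio: float = 2.0) -> bool:
    """Return True when the registry has reached the threshold size
    relative to the desired state image (e.g., twice as many entries)."""
    return len(registry) >= ratio * len(desired_image)
```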


To compact updates within the queues, the compaction procedure identifies sets of updates within a queue that reference the same entry and combines these into a single update. That is, rather than two (or more) pointers with separate sets of metadata specifying different modifications to the referenced object, the updates are combined into a single pointer with the metadata combined into a larger set of metadata. In certain cases, the update can be removed from the queue altogether. Specifically, when a first update referencing an entry specifies to create the corresponding logical entity and a last update referencing the same entry specifies to delete the corresponding logical entity, these updates (and any intervening updates referencing the same entry) can be removed from the queue. Similarly, any other pair of updates that specifically negate each other (e.g., a first update adding X to a value of a property and a second update subtracting X from the value of the same property) are removed in some embodiments. On the other hand, a first update adding X to a value of a property and a second update adding Y to the value of the property would be combined into a single update specifying to add X and then add Y to the value of the property.
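The combine-or-cancel logic for a single entry's metadata might look like the following sketch, assuming the tuple-based metadata format from the earlier example; real negation rules would depend on the property types involved.

```python
def merge_updates(ops):
    """Merge, in queue order, the metadata of all updates referencing the
    same shared entry. Returns None when the merged result is a no-op and
    the combined update can be dropped from the queue entirely."""
    # Created and then deleted while still queued: publishing serves no purpose.
    if ops and ops[0] == ("create",) and ops[-1] == ("delete",):
        return None
    merged = []
    for op in ops:
        last = merged[-1] if merged else None
        # Adding X and then adding -X to the same property negates exactly.
        if (last and op[0] == "add" and last[0] == "add"
                and op[1] == last[1] and op[2] == -last[2]):
            merged.pop()
        else:
            merged.append(op)
    return merged or None

# merge_updates([("add", "v", 5), ("add", "v", -5)])             -> None (dropped)
# merge_updates([("add", "v", 5), ("add", "v", 7)])              -> one update, both ops
# merge_updates([("create",), ("set", "name", "a"), ("delete",)]) -> None (dropped)
```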


The compaction process additionally, as noted, removes unnecessary entries from the second data structure, thereby limiting the amount of memory occupied by the second data structure. A particular entry may be removed from the second data structure so long as (i) the particular entry does not have a corresponding entry in the first data structure (i.e., the corresponding logical entity is not part of the current desired network state) and (ii) no updates remain in any of the queues that reference the particular entry. In some embodiments, the network controller removes entries from the second data structure whenever these conditions are met, not only as part of the compaction process. However, the compaction process may result in the removal of updates such that the second condition is met for additional entries.


The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description and the Drawings is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description and the Drawings, but rather are to be defined by the appended claims, because the claimed subject matters can be embodied in other specific forms without departing from the spirit of the subject matters.





BRIEF DESCRIPTION OF THE DRAWINGS

The novel features of the invention are set forth in the appended claims. However, for purpose of explanation, several embodiments of the invention are set forth in the following figures.



FIG. 1 conceptually illustrates a network control system including a management plane, a central controller, and multiple local controllers.



FIG. 2 conceptually illustrates the architecture of a central network controller of some embodiments.



FIG. 3 conceptually illustrates a process for receiving desired state updates and generating translated state updates to place in the local controller queues.



FIGS. 4 and 5 conceptually illustrate examples of a network controller receiving updates and managing its desired state image, registry, and local controller queue data structures according to some embodiments.



FIG. 6 conceptually illustrates a process of some embodiments for performing queue and registry compaction to manage the memory load on a network controller.



FIG. 7 conceptually illustrates a network controller performing compaction.



FIGS. 8-10 provide examples of queue compaction, some of which result in the removal of updates.



FIG. 11 conceptually illustrates a controller as updates are compacted and objects are then deleted from the registry.



FIG. 12 conceptually illustrates an electronic system with which some embodiments of the invention are implemented.





DETAILED DESCRIPTION

In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.


Some embodiments provide a method for managing update queues at a network controller that maintains an update queue for each managed forwarding element (MFE) of a set of MFEs that the network controller manages. The network controller of some embodiments receives updates to distribute to one or more MFEs, identifies which of the MFEs that it manages require each update, and adds separate updates to the queues for the identified MFEs. These updates are distributed from the queues to the MFEs (or to local controllers operating alongside the MFEs to manage the MFEs directly). In order to reduce the load on the network controller, in some embodiments the separate updates added to multiple queues for the same received update all refer to a shared entry or entries (e.g., shared objects) stored by the network controller to represent a network entity created, modified, or removed by the update. In addition, some embodiments compact the updates within the separate queues at various times in order to reduce (i) the number of updates in at least some of the queues and (ii) the number of entries stored by the network controller.


The network controller of some embodiments receives updates as changes to the desired state of one or more entities of a logical network implemented by at least a subset of the MFEs managed by the controller. The physical network (e.g., a datacenter, combination of multiple datacenters, etc.) that contains the MFEs may implement multiple logical networks, each of which includes multiple logical entities. These logical entities include, in some embodiments, logical forwarding elements (e.g., logical routers, logical switches) and logical ports of such logical forwarding elements.



FIG. 1 conceptually illustrates a network control system 100 including such a network controller 110. As shown, the network control system 100 includes a management plane 105, a central controller 110, and multiple local controllers (also called the local control plane) 115-125 that operate on host machines 130-140. In addition, each of the host machines 130-140 includes a managed forwarding element (MFE) 145-155 that processes data traffic according to configuration information received from their respective controllers.


Though shown as a single entity, it should be understood that the management plane 105 may be implemented as a distributed system. Similarly, the central controller 110 may be part of a cluster of such central controllers that receive configuration data from the management plane 105. That is, the management plane 105 may include multiple computing devices that implement management plane functions, and a central control plane may include multiple central controllers (including the controller 110) that implement central control plane functions. In some embodiments, each centralized controller computer includes both management plane and central control plane functions (e.g., as separate applications on the computer).


The management plane 105 of some embodiments is responsible for receiving logical network configuration inputs 165 (e.g., through an application programming interface). Users (e.g., network administrators) may input logical network configuration data through, e.g., a command-line interface, a graphical user interface, etc. The configuration for each logical network, in some embodiments, may include data defining one or more logical forwarding elements, such as logical switches, logical routers, etc. This configuration data may include information describing the logical ports (e.g., assigning MAC and/or IP addresses to logical ports) for these logical forwarding elements, how the logical forwarding elements interconnect, various service rules (such as distributed firewall rules), etc.


The management plane 105 receives the logical network configuration input 165 and generates desired state data that specifies how the logical network should be implemented in the physical infrastructure. In some embodiments, this data includes a description of the logical forwarding elements and logical ports in a uniform format (e.g., as a set of database records or another format). When users provide configuration changes (e.g., creating or deleting logical entities, modifying properties of logical entities, etc.), the changes to the desired state are distributed as logical network updates 170 to the central controller 110 (or controllers).


The central controller 110 receives these updates 170 from the management plane, and is responsible for distributing the updates to the MFEs 145-155 that it manages (e.g., via the local controllers 115-125). In some embodiments, the network controller 110 is part of a central control plane cluster, with each controller in the cluster managing a different set of MFEs. The network controller receives the update 170 to the desired state and, based at least in part on the receipt of information received from the local controllers for its MFEs (referred to as runtime state), generates translated state updates for the local controllers 115-125. These translated state updates are placed in the distribution (or publication) queues for the local controllers. As explained further below, the separate queues allow for updates to be published at different rates to different local controllers, which may (for various reasons) process the updates at different speeds. In many cases, not all local controllers will need to receive a particular update. If a MFE does not implement the logical entity to which the update relates, then the central controller will not generate an update for the corresponding local controller's queue. In the example, the central controller 110 only generates and publishes the information in the update 170 to the local controllers 115 and 125 (via updates 175 and 180). FIG. 2, described below, conceptually illustrates the architecture of a central controller of some embodiments.


The local controllers 115-125 are responsible for translating the received updates into configuration data formatted for their respective MFEs 145-155. In some embodiments, the local controller is a daemon that operates in the virtualization software of the host machine, as does the MFE. In other embodiments, the local controller and MFE may operate within a VM that hosts multiple containers for one or more logical networks. In some such embodiments, a first local controller and MFE operate in the virtualization software on the host machine while a second local controller and MFE operate in the container host VM (or multiple such controllers/MFE operate in multiple container host VMs).


In addition, while in some embodiments all MFEs in the physical infrastructure are of the same type (and thus require data in the same format), in other embodiments the physical infrastructure may include multiple different types of MFEs. For instance, some embodiments include both hosts with kernel virtual machine (KVM) virtualization software with a flow-based MFE (e.g., Open vSwitch) and hosts with ESX virtualization software with a feature-based MFE. Such different types of MFEs require different data formats from the local controller. As such, the local controllers 115-125 of some embodiments are configured to translate the received updates into the specific format required by their MFEs.


As mentioned, FIG. 2 conceptually illustrates the architecture of a central network controller 200 of some embodiments, such as the network controller 110 of FIG. 1. The network controller 200 includes a management plane interface 205, an update analyzer 210, a queue manager 215, a publisher 220, a compactor 225, and a local control plane interface 230.


In addition, the network controller stores (e.g., in volatile memory, such as RAM) a desired state image 235, a registry 240, and a set of queues 245. As mentioned, the network controller of some embodiments uses shared entries to represent the logical entities that corresponding updates in multiple queues reference. In some embodiments, the controller stores two data structures with entries (e.g., objects) for logical entities. The controller stores (i) a desired state image 235 with an entry for each logical entity in the desired state of the logical networks and (ii) a registry 240 with an entry for each logical entity referenced by an update for at least one MFE (possibly including both updates currently in the queues 245 and updates that have been distributed from the queues 245 to the local controllers).


The desired state image 235 represents the current desired state of all of the logical networks, and thus has an entry for each logical switch, logical router, logical switch port, logical router port, etc. of each logical network. In addition, some embodiments create entries (e.g., additional objects) for each property of such a logical entity. Thus, a logical switch port might have a primary object as well as related objects for some or all of its properties that can be modified by configuration updates.


The registry 240, in some embodiments, includes a corresponding entry for each entry in the desired state image (and thus includes entries for all logical entities part of the current desired state of the logical networks) as well as entries for other logical entities that may be referenced by the updates in the queues 245. When a first update specifies the creation of a logical entity and a later update specifies the deletion of that logical entity, the corresponding entry or entries will be removed from the desired state image 235. However, because the updates in the queues 245 refer to shared entries in the registry 240, the entry or entries for the logical entity are not removed from the registry 240.


In addition to the desired state image 235 and registry 240, the network controller 200 stores the queues 245, with one queue for each MFE managed by the network controller. Each of these queues stores a sequence of updates, which the network controller publishes from the queue to the corresponding local controller. These updates, in some embodiments, are structured as references (e.g., pointers) along with metadata specifying the modification(s) to be made to the logical entity corresponding to the referenced entry. For instance, an update to remove a logical switch would have a reference to the entry for the logical switch along with metadata specifying to delete the corresponding logical entity. An update to change the name of the logical switch would have a reference to the same entry (or a related entry for the logical switch name) with metadata specifying the new value for the name property. Having numerous updates in separate queues reference the same entry reduces the memory used by the network controller, as the entries (objects) typically occupy much more memory than the references (pointers) to the objects. This memory load may be a problem if one or more of the queues backs up significantly. Furthermore, multiple separate updates in a queue that modify the same logical entity will also refer to the same shared entry, thereby further saving space.


The management plane interface 205 handles interactions with the management plane (which may be operating, e.g., as a separate application on the same physical machine as the central controller and/or on one or more distinct physical machines). The central controller receives changes in the desired state of one or more logical networks through this management plane interface 205.


The update analyzer 210 receives updates to the desired state and determines whether to add or remove objects from the desired state image 235 and/or registry 240. For instance, when an update specifies to create a new logical entity, the update analyzer 210 creates one or more objects in the desired state image 235 and the registry 240 for the logical entity. Some embodiments create a single object in each of the two data structures, while other embodiments create additional objects to represent the various properties of the logical entity. On the other hand, when an update specifies to delete an existing logical entity, the update analyzer 210 removes the corresponding object or objects from the desired state image 235. However, until the update deleting the logical entity has been published to all of the requisite local controllers, the corresponding object or objects are left in the registry 240.


The queue manager 215 of some embodiments generates the translated state updates for the queues 245 based on the desired state updates received from the management plane as well as runtime state information received from the local controller. The runtime state may identify on which MFEs different logical entities are realized as well as other information indicating the realization of the desired state in the physical infrastructure. The queue manager 215 is responsible for identifying into which queue the updates should be placed based on data stored by the network controller identifying the span for a given logical entity (i.e., the MFEs that need configuration data for a particular logical entity). The queue manager 215 generates a reference (e.g., a pointer) to the appropriate object in the registry 240 for each update placed in a separate queue, and also provides the metadata along with the reference that identifies the modifications to make to the logical entity (e.g., create, delete, change value of property, etc.).


The publisher 220 is responsible for distributing data from the queues through the local control plane interface 230 to the appropriate local controllers. In some embodiments, the central controller 200 has a separate channel with each of the local controllers that it manages via the interface 230. When the central controller receives indication through this communication channel that the local controller has processed an update, the publisher 220 pushes the next update from the corresponding queue to the local controller through the local controller interface 230.


The updates may be distributed from different queues 245 at different speeds. This may occur due to a local controller operating slowly, connectivity between the network controller 200 and a local controller being cut off for a period of time, different numbers of updates being distributed to different local controllers, etc. Thus, while an update to delete a particular logical port may have been distributed to twenty local controllers, the update could still remain in the queue of a twenty-first local controller, and thus the object referenced by these twenty-one updates needs to remain in the registry.


While using shared objects in the second data structure does reduce the overall memory load on the network controller, both the second data structure and any very slow update queues could grow unendingly large without a procedure in place to limit these structures. Thus, the compactor 225 of some embodiments performs a queue compaction procedure that guarantees an upper bound on the overall memory occupied by the combination of the first data structure, the second data structure, and all of the individual update queues (for a bounded number of desired state logical entities). That is, the overall memory load is guaranteed to be a bounded function of the memory load of the first data structure (which is bounded according to the number of logical entities in the desired state at any given time).


The compactor 225 of some embodiments both compacts updates within the queues and removes unnecessary entries from the registry 240. While the compaction procedure is performed continuously in some embodiments, in order to save processing resources other embodiments perform the compaction process whenever the registry 240 reaches a specified size. For example, some embodiments perform the compaction process whenever the registry 240 reaches a threshold size relative to the desired state image 235, such as whenever the registry 240 has twice as many entries as the desired state image 235.


To compact updates within the queues, the compactor 225 identifies sets of updates within a queue that reference the same entry and combines these into a single update. That is, rather than two (or more) pointers with separate sets of metadata specifying different modifications to the referenced object, the updates are combined into a single pointer with the metadata combined into a larger set of metadata. In certain cases, the update can be removed from the queue altogether. Specifically, when a first update referencing an object specifies to create the corresponding logical entity and a last update referencing the same object specifies to delete the corresponding logical entity, these updates (and any intervening updates referencing the same object) can be removed from the queue. Similarly, any other pair of updates that specifically negate each other (e.g., a first update adding X to a value of a property and a second update subtracting X from the value of the same property) are removed in some embodiments. On the other hand, a first update adding X to a value of a property and a second update adding Y to the value of the property would be combined into a single update specifying to add X and then add Y to the value of the property.


The compactor 225 additionally, as noted, removes unnecessary objects from the registry, thereby limiting the amount of memory occupied by the registry. A particular object may be removed from the registry 240 so long as (i) the particular entry does not have a corresponding object in the desired state image 235 (i.e., the corresponding logical entity is not part of the current desired network state) and (ii) no updates remain in any of the queues 245 that reference the particular object. In some embodiments, the compactor 225 removes objects from the registry 240 whenever these conditions are met, not only as part of the compaction process. However, the compaction process may result in the removal of updates such that the second condition is met for additional objects.


The above introduces the management of update queues and shared update objects at a network controller. In the following, Section I describes adding updates to the various local controller queues, while Section II describes the compaction process of some embodiments. Section III then describes the electronic system with which some embodiments of the invention are implemented.


I. Adding Updates to Queues


The network controller of some embodiments receives updates to the desired state of one or more logical networks implemented by the managed physical infrastructure and, based at least in part on the receipt of runtime state from the local controllers, generates translated state updates to distribute to the local controllers. The controller adds these translated state updates to the distribution (or publication) queues for the local controllers. As mentioned, in some embodiments the network controller uses shared objects representing the logical entities to which corresponding updates in multiple queues reference. In some embodiments, the controller stores two data structures with objects for the logical entities. Specifically, the controller stores (i) a desired state image with an object for each logical entity in the desired state of the logical networks and (ii) a registry with an object for each logical entity referenced by at least one update (including both updates currently in the queues and updates that have been distributed from the queues to the local controllers).



FIG. 3 conceptually illustrates a process 300 for receiving desired state updates and generating translated state updates to place in the local controller queues. The process 300, in some embodiments, is performed by a centralized network controller that manages numerous MFEs operating on numerous host machines. Each host machine includes one or more MFEs (e.g., software switches and/or routers) that are each configured to implement one or more logical networks based on the translated state updates. The MFE or MFEs on a host machine are managed by a local controller, also operating on the host machine. In some embodiments, both the MFE(s) and the local controller operate within the virtualization software of the host machine.


The centralized network controller may be one of several centralized controllers operating in a cluster, with each of the centralized controllers receiving desired state updates and performing the process 300 (or a similar process). Each centralized controller, in some embodiments, manages a different set of MFEs (that is, each MFE is assigned to one centralized controller, which provides the updates for the MFE to its local controller).


As shown, the process 300 begins by receiving (at 305) an update to a logical entity from the management plane. As described above, the management plane generates the desired state updates based on, e.g., configuration input from a network administrator to modify the logical network configuration. These updates may add or remove logical forwarding elements (e.g., logical switches, logical routers) or logical ports, or modify properties of these logical entities. Modifying properties of a logical port or logical forwarding element could involve changing the name of that logical entity, changing a value of some other property, adding or removing service rules (e.g., distributed firewall rules) that relate to the logical entity, etc.


A single desired state update received from the management plane might include updates to numerous logical entities. For instance, an administrator might create a new logical switch with numerous logical ports, each of which would have to be created. However, the process 300 relates to a single logical entity. In some embodiments, the network controller performs this process (or a similar process) for each logical entity that is updated by the changes to the desired state.


The process 300 determines (at 310) whether an object exists in the desired state image for the logical entity modified by the update received at 305. If an object exists in the desired state image, then a corresponding object will also exist in the registry (though the converse is not necessarily true). Similarly, if no object exists in the desired state image for a logical entity, but an update is received pertaining to that logical entity, then (i) the update should be to create the logical entity and (ii) the registry should also not have an object for the logical entity. While the registry will store objects that no longer have corresponding objects in the desired state image, these should be objects that have already been deleted according to the desired state.


Thus, when the desired state image does not store an object for the logical entity to which the update pertains, the process creates (at 315) an object in the desired state image and an object in the registry pertaining to the logical entity. The desired state image allows the network controller to keep track of the desired state of the logical networks implemented within the physical infrastructure, and thus objects are created for each logical entity with the properties of that logical entity. The registry of some embodiments stores the objects to which state updates actually refer, and thus the process creates an object in the registry for each object created in the desired state image.


When the desired state image already stores an object for the logical entity (i.e., the update does not create a new logical entity), the process determines (at 320) whether the received update deletes the logical entity. If the update deletes the logical entity, then the process removes (at 325) the object corresponding to the logical entity from the desired state image. However, the network controller does not remove the corresponding object from the registry, as both the translated state updates generated for the received update and any other updates in the queue for the logical entity will refer to that registry object. In this way, the registry enables the updates to refer to shared objects while the desired state image is kept up to date to match the desired state according to the management plane.


The process 300 then identifies (at 330) the MFEs that require the update. In some embodiments, the central network controller (or controller cluster) calculates the span for each logical entity in each logical network, based on the location of the end machines (e.g., virtual machines (VMs), containers, etc.) and the structure of the logical network. In order for the MFEs to perform first-hop processing (i.e., performing all or most of the logical processing for a packet at the first hop), each MFE (or set of MFEs on a host machine) should be configured with all potentially needed information for the logical networks of the end machines on that host machine. Thus, for example, the span of a first logical switch with five connected VMs will be not only the host machines of those five VMs, but also the host machines for VMs that connect to other logical switches that connect to the same logical router as the first logical switch. Furthermore, the logical switch may span to gateway machines that perform processing for the logical router, as well as potentially other MFEs. Thus, the number of MFEs that require an update to a particular logical entity may be quite large.


Next, the process 300 creates (at 335) an update, for each identified MFE, that references the shared object in the registry for the logical entity. That is, the network controller generates a separate update for each MFE (local controller) to which the received desired state update will be distributed. In some embodiments, these updates are structured as pointers to the shared object, along with metadata specifying the modification(s) to be made to the logical entity. For instance, an update to remove a logical router would have a reference to the registry object for the logical router along with metadata specifying to delete the object. An update to change the name of the logical router would have a reference to the same object (or a related object for the logical router name) with metadata specifying the new value for the name property. Having numerous updates in separate queues reference the same object reduces the memory used by the network controller, as the objects typically occupy much more memory than the pointers.


The process then adds (at 340) the created updates to the queues for each identified MFE (local controller). The process then ends. The queues are organized as first-in-first-out structures, so that the updates are published to their respective local controllers in the order in which the network controller receives the desired state updates.
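A compact sketch of process 300 as a whole follows, reusing the hypothetical RegistryEntry and Update types sketched in the Summary. The Controller container and its span() helper are assumptions made for self-containment; span computation itself (operation 330) is treated as given.

```python
class Controller:
    """Hypothetical container for the controller's three stores."""
    def __init__(self, span_fn):
        self.desired_image = {}   # entity_id -> desired-state record
        self.registry = {}        # entity_id -> shared RegistryEntry
        self.queues = {}          # mfe_id -> list of Update (FIFO)
        self.span = span_fn       # entity_id -> iterable of mfe_ids

def handle_desired_state_update(ctrl, entity_id, ops):
    if entity_id not in ctrl.desired_image:        # 310/315: new entity, so the
        ctrl.desired_image[entity_id] = {"id": entity_id}   # update must create it
        ctrl.registry[entity_id] = RegistryEntry(entity_id)
    elif ("delete",) in ops:                       # 320/325: deletion removes the
        del ctrl.desired_image[entity_id]          # desired-state object only;
                                                   # the registry entry remains
    shared = ctrl.registry[entity_id]
    for mfe in ctrl.span(entity_id):               # 330: MFEs requiring the update
        ctrl.queues[mfe].append(Update(shared, list(ops)))  # 335/340: enqueue
```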



FIGS. 4 and 5 conceptually illustrate examples of a network controller 400 receiving updates and managing its desired state image, registry, and local controller queue data structures according to some embodiments. Specifically, FIG. 4 illustrates the network controller 400 receiving and processing an update to add a logical port over four stages 405-420.


As shown at the first stage 405, the network controller 400 stores a desired state image 425, a registry 430, and a set of update queues 435-445 for different local controllers located at different machines. While this example shows only three update queues, it should be understood that in many cases a centralized network controller will provide updates to many (e.g., hundreds or thousands) of local controllers, and will store an update queue for each one. In addition, the controller 400 includes an update handling module 450, which performs the functionality of the update analyzer 210 and queue manager 215 of the network controller 200 of FIG. 2. That is, the update handler 450 performs the process 300 or a similar process to manage the desired state image 425 and registry 430 based on received updates and to generate updates and add them to the queues 435-445.


In the first stage 405, the network controller 400 receives an update 455 from the management plane (not shown). The update 455 specifies to create a new logical port LP1. The logical port update 455 would, in some embodiments, include information about the logical port such as the logical forwarding element (e.g., logical router or logical switch) to which the logical port LP1 belongs, the network addresses (e.g., IP and MAC addresses) associated with the logical port, etc. In some embodiments, the update indicates to which MFE the port belongs (i.e., on which host machine the end machine attached to LP1 operates); in other embodiments, this information is received from the local controller that manages that MFE.


The second stage 410 illustrates that the update handler 450 adds a first object 460 to the desired state image 425 and a second object 465 to the registry 430 for the newly created logical port LP1. Though shown as a single object, in some embodiments multiple objects are added for each logical entity, including a primary object as well as related objects for certain properties of the logical entity. As a result of adding these objects, both the desired state image 425 and registry 430 are larger in the third stage 415 than in the first stage 405.


In the third stage 415, the update handler 450 generates updates 470 and 475 for the queues 435 and 445, respectively. The update handler 450 would have determined that the span of the new logical port included the MFEs corresponding to the first queue 435 and the third queue 445, but not the second queue 440. These updates 470 and 475 include references (e.g., pointers) to the object 465 stored in the registry 430, and also include metadata about the logical port (e.g., a create action, and other parameters received with the update 455). As shown in the fourth stage 420, the queues 435 and 445 have increased in size by one update, while the second queue 440 stays static. These updates (labeled LP1) both point to the same object 465 in the registry 430 at this stage.



FIG. 5 illustrates the network controller 400 receiving and processing another update to modify the logical port LP1 over three stages 505-515. The first stage 505 illustrates the controller 400 after some amount of time has elapsed since stage 420 of FIG. 4. The controller has received at least two updates during this time, as the queue 435 has an additional update that has been placed in the queue after the update 470 to create the logical port LP1. The controller has already published the update 475 from the queue 445 to the appropriate local controller, and currently stores two additional updates received after that one. In addition, at the first stage 505 the network controller receives an update 520 specifying modifications to the logical port LP1 (e.g., a different IP address, a change to its name, a firewall rule applying to that port, etc.).


At the second stage 510, the update handler 450 generates updates 525 and 530 for the queues 435 and 445, respectively. As these updates also pertain to LP1, they will be sent to the same local controller queues as the first update (assuming there hasn't been any migration of the pertinent end machines from one host to another in the interim). These updates 525 and 530 are structured similarly to the updates 470 and 475, with pointers to the registry object 465, with metadata indicating the specified modifications to the logical port. In the third stage 515, the queues 435 and 445 have increased in size by one update, with the new updates 525 and 530 pointing to the registry object 465 along with the update 470 that is still in the queue 435.
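Replaying FIGS. 4 and 5 against the process sketch above (the host names and the fixed span are hypothetical) shows the intended sharing: both queued LP1 updates point at the single registry object, and the out-of-span queue is untouched.

```python
ctrl = Controller(span_fn=lambda entity_id: ["host1", "host3"])
ctrl.queues = {"host1": [], "host2": [], "host3": []}

handle_desired_state_update(ctrl, "LP1", [("create",)])                 # FIG. 4
handle_desired_state_update(ctrl, "LP1", [("set", "name", "web-port")]) # FIG. 5

# Both LP1 updates in host1's queue reference one shared registry object.
assert ctrl.queues["host1"][0].entry is ctrl.queues["host1"][1].entry
assert not ctrl.queues["host2"]   # LP1 is not in host2's span
```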


II. Compacting Update Queues and Registry


While using shared objects in the registry does reduce the overall memory load from the publication functions of the network controller, both the registry and any very slow update queues could grow unendingly large without a procedure in place to limit these structures. Thus, some embodiments use a queue compaction procedure that guarantees an upper bound on the overall memory occupied by the combination of the desired state image, the registry, and all of the individual update queues (for a bounded number of desired state logical entities). That is, the overall memory load is guaranteed to be a bounded function of the memory load of the desired state image (which is bounded according to the number of logical entities in the desired state at any given time).



FIG. 6 conceptually illustrates a process 600 of some embodiments for performing this queue and registry compaction to manage the memory load on the network controller. The queue compaction process is performed by the network controller in some embodiments, upon reaching certain conditions. This process 600 will be described in part by reference to FIG. 7, which conceptually illustrates a network controller 700 performing compaction over three stages 705-715. The network controller 700 includes a desired state image 720, a registry 725, and four queues 730-745, functioning as described above.


As shown, the process 600 begins by determining (at 605) that the registry has reached a threshold size. The queue compaction procedure of some embodiments both compacts updates within the queues and removes unnecessary entries from the registry. While the compaction procedure is performed continuously in some embodiments, in order to save processing resources, other embodiments perform the compaction process whenever the registry reaches a specified size. For example, some embodiments perform the compaction process whenever the registry reaches a threshold size relative to the desired state image, such as whenever the registry has twice as many entries as the desired state image. In this case, the process 600 begins when the registry reaches this threshold.


In the first stage 705 of FIG. 7, the network controller 700 receives an update 750 that specifies to delete a logical switch LS2. As shown, at this stage, the desired state image 720 includes three objects, for two logical switches and a logical router (it should be understood that this is a simplistic representation, and that a typical desired state image could include thousands of objects). The registry 725 includes four objects, three of which correspond to the objects in the desired state image 720. The fourth object, for a third logical switch LS3, refers to an object that was previously deleted from the desired state but for which updates may still remain in one or more of the queues 730-745.


As a result of receiving this update 750, the update handler 755 (i) removes the object for LS2 from the desired state image 720 and (ii) adds updates to each of the queues that correspond to local controllers requiring these updates (i.e., based on the span of the logical entity being updated). In this case, updates are added to the queues 730, 735, and 745. The second stage 710 illustrates that the compactor 760 of the network controller 700 determines that after the object for LS2 is removed from the desired state image 720, the registry 725 has become twice the size of the desired state image 720. As a result, the compactor 760 begins the queue and registry compaction process.


Returning to FIG. 6, the process 600 then selects (at 610) an update queue. Different embodiments may select the update queues in different orders (e.g., randomly, based on size, based on names used to identify the different local controllers, etc.). Furthermore, it should be understood that this figure illustrates a conceptual process, and that the processing of the update queues is not necessarily performed serially (i.e., one at a time) as shown in the figure. Instead, some embodiments perform processing to compact the update queues in parallel, or at least partially so.


Next, the process 600 identifies (at 615) sets of updates in the selected queue that refer to the same object in the registry. The process combines (at 620) each such set into a single update by combining the descriptive metadata from the set of updates. That is, if two (or more) pointers in the same queue have separate sets of metadata specifying different modifications to the same referenced object, the compaction procedure combines these updates into a single pointer with all of the metadata from the multiple individual updates. For instance, a first update might create a logical entity and a second update might modify a property of that logical entity. These updates would not necessarily be directly next to each other in the queue, but might instead be separated by updates relating to other logical entities. Detailed examples of this queue compaction will be described below by reference to FIGS. 8-10.


The process 600 then determines (at 625) whether any of the combined updates can be removed from the currently selected queue altogether, and removes (at 630) any such updates from the queue. In certain cases, the network controller determines that the combined update does not need to be sent to the local controller at all, because the end result of the combined metadata specifies a no-op. For instance, when a first update referencing an entry specifies to create the corresponding logical entity and a last update referencing the same entry specifies to delete the corresponding logical entity, these updates (and any intervening updates referencing the same entry) can be removed from the queue, as sending these to the local controller does not serve a purpose. This typically occurs when a queue has a backup (e.g., because a local controller/MFE is slow to implement changes), or when an administrator quickly creates and then deletes a logical entity.


Similarly, any other pair of updates that specifically negate each other are removed in some embodiments. As an example, a first update adding X to a value of a property and a second update subtracting X from the value of the same property results in no change to the value of the property. However, if any additional updates are made to the logical entity then the update cannot be removed from the queue. Furthermore, a first update adding X to a value of a property and a second update adding Y to the value of the property would be combined into a single update specifying to add X and then add Y to the value of the property.
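Operations 615-630 might be rendered as the following sketch, reusing the merge_updates helper sketched in the Summary. Grouping by first occurrence preserves each entry's relative order; the patent does not discuss ordering constraints between updates to different entities, so ignoring them here is an assumption.

```python
def compact_queue(queue):
    """Group a queue's updates by the shared registry entry they reference,
    merge each group's metadata in queue order, and drop groups whose
    merged result is a no-op (operations 615-630)."""
    groups = {}                                   # id(entry) -> (entry, ops)
    for upd in queue:
        groups.setdefault(id(upd.entry), (upd.entry, []))[1].extend(upd.metadata)
    compacted = []
    for entry, ops in groups.values():            # dicts preserve insertion order
        merged = merge_updates(ops)
        if merged is not None:                    # None: create+delete or exact negation
            compacted.append(Update(entry, merged))
    return compacted
```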


After compacting the updates in the currently selected queue, the process 600 determines (at 635) whether additional queues remain to compact. If so, the process returns to 610 to select a next queue. As noted above, this process is conceptual, and many threads compacting multiple queues in parallel may be carried out in some embodiments. Returning to FIG. 7, in the third stage 715 the queues have grown significantly smaller. The first queue 730 has gone from seven updates to two updates, the second queue 735 has gone from five updates to three updates, the third queue 740 has gone from five updates to one update, and the fourth queue 745 has gone from eight updates to three updates. This could include both compacting multiple updates for a logical entity into a single update as well as completely removing updates (e.g., if any of the queues still had the update creating LS2 or LS3, in addition to the updates removing these logical switches).


Returning to FIG. 6, after all of the queues have been compacted (both combining updates and removing updates), the process 600 identifies (at 640) objects in the registry that (i) do not have corresponding objects in the desired state image and (ii) are not referenced by any updates to the queues. The process then removes (at 645) these identified objects from the registry, and ends. These identified objects are those that correspond to logical entities which have been removed from the desired state and for which those updates deleting the logical entity have been published from all of the queues (or deleted from the queue because the update creating the logical entity was never published). In some embodiments, the network controller removes objects from the registry whenever these conditions are met, not only as part of the compaction process. However, the compaction process may result in the removal of updates such that the second condition is met for additional objects.
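Operations 640-645 then reduce to a simple reachability check over the queues, as in this sketch (again using the hypothetical Controller container):

```python
def collect_registry(ctrl):
    """Remove registry entries that (i) have no corresponding object in the
    desired state image and (ii) are referenced by no queued update."""
    live = {id(u.entry) for q in ctrl.queues.values() for u in q}
    for entity_id in list(ctrl.registry):
        entry = ctrl.registry[entity_id]
        if entity_id not in ctrl.desired_image and id(entry) not in live:
            del ctrl.registry[entity_id]
```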


In FIG. 7, the third stage 715 also shows that the compactor 760 removed an entry from the registry 725, so that it has three objects rather than four objects (i.e., the object for LS3 was removed). In this example, if each object uses 10 kB of space and each update (pointer) uses 8 bytes of space, then the total size of the data structures was 60 kB of objects and 200 bytes of updates before compaction. After compaction, this has been reduced to 50 kB of objects and 72 bytes of updates. While this may not seem large (because each update pointer does not occupy a large amount of memory), in a real-world scenario hundreds of objects could be removed (500 objects would reduce the memory usage by 5 MB) and thousands (or even millions) of updates removed/compacted over a thousand queues (10,000 pointers would reduce memory usage by 80 kB).



FIGS. 8-10 provide examples of queue compaction, some of which result in the removal of updates for different reasons. FIG. 8 specifically illustrates an update queue 800 that is compacted and from which a set of updates is deleted, over three stages 805-815. In the first stage 805 (pre-compaction), the queue 800 contains eight separate updates pertaining to six different logical entities (two logical switches LS1 and LS3, a logical router LR4, and three logical ports LP3, LP7, and LP9). These eight separate updates include three updates 820-830 that pertain to the logical switch LS1: a first update 820 creating the logical switch, a second update 825 assigning a name to the logical switch, and a third update 830 deleting the logical switch.


In the second stage 810, after compaction, these three updates 820-830 have been combined into a single update 835, while the other five updates remain the same. In addition, this update 835 starts with the creation of the logical switch and finishes with the deletion of the logical switch. As such, the update can be removed, because there is no benefit in having the update published to the local controller. Thus, the third stage 815 shows that the update is removed from the queue 800. This deletion provides memory savings at the centralized controller as well as processing savings at the local controller that no longer needs to process this update. In addition, there is the possibility that the object corresponding to LS1 will be removed from the registry.



FIG. 9 illustrates an update queue 900 that is compacted without deleting any updates, over two stages 905-910. In this example, the first stage shows an update queue with seven updates pertaining to five different logical entities (one logical switch LS1, one logical router LR4, and three logical ports LP3, LP7, and LP9). Two of the updates 915 and 920 pertain to the logical switch LS1 and two of the updates 925 and 930 pertain to the logical port LP3. Thus, each of these pairs of updates is compacted into a single update in the second stage 910. The single update 935 for LS1 specifies to create and name that logical switch, while the single update 940 for LP3 specifies to create and name the logical port. Neither of these updates can be removed, as they still need to be propagated to the local controller.



FIG. 10 illustrates an update queue 1000 that is compacted and from which a set of updates is deleted, over three stages 1005-1015. In the first stage 1005, the update queue includes seven updates, pertaining to five logical entities. Two of the updates 1020 and 1025 pertain to the logical switch LS1 and two of the updates 1030 and 1035 pertain to the logical port LP3. Thus, each of these pairs of updates is compacted into a single update in the second stage 1010. The single update 1040 for the logical switch LS1 specifies to create the logical switch and provide a name for the logical switch. The single update 1045 for the logical port LP3 first adds five to a value pertaining to that logical port and then subtracts five from the same value. Because this results in a no-op (a net change of zero), some embodiments remove the update from the queue, and thus the third stage 1015 illustrates the queue 1000 as storing only four updates.
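The specification does not describe how such numeric modifications are encoded; one possible sketch, using a hypothetical DeltaUpdate that carries a signed delta, shows why the pair collapses to a removable no-op:

```python
from dataclasses import dataclass

@dataclass
class DeltaUpdate:
    entity_id: str
    delta: int                  # signed change to some per-port value

def merge_deltas(first, second):
    total = first.delta + second.delta
    if total == 0:
        return None             # +5 then -5: a no-op, so the update is dropped
    return DeltaUpdate(first.entity_id, total)

assert merge_deltas(DeltaUpdate("LP3", +5), DeltaUpdate("LP3", -5)) is None
```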



FIG. 11 conceptually illustrates a controller 1100 over three stages 1105-1115 in which updates are compacted and objects are then deleted from the registry. The first stage 1105 illustrates that the network controller 1100 includes a desired state image 1120, a registry 1125, and three queues 1130-1140. The desired state image 1120 includes objects for logical switch LS1 and logical router LR1, while the registry 1125 includes objects for logical router LR1 and three logical switches LS1, LS2, and LS3. The queues include various updates, which for simplicity relate to either creation or deletion of logical entities.


The second stage 1110 illustrates the controller 1100 after the queues 1130-1140 have been compacted. The first queue 1130 had updates to create logical router LR1, delete logical switch LS3, create logical switch LS2, create logical switch LS1, and delete logical switch LS2. As a result of compaction, the updates creating and deleting logical switch LS2 have been removed from the queue 1130. The second queue 1135 had updates to delete logical switch LS3, create logical switch LS2, and delete logical switch LS2. As with the first queue 1130, the compaction removes the updates creating and deleting logical switch LS2 from the queue 1135. The third queue 1140 stays the same, as it only had one update in the first place (to remove logical switch LS3).


In the third stage 1115, the controller 1100 has removed the object for logical switch LS2 from the registry 1125. This is possible because (i) there is no object in the desired state image 1120 for logical switch LS2 and (ii) as a result of compaction, none of the queues any longer have updates referring to the object for logical switch LS2. The logical switch LS3, on the other hand, satisfies the first criterion (it has no corresponding object in the desired state image), but its object cannot be removed from the registry because the updates for deleting this logical switch still remain in the queues. However, once these updates have been published from the three queues 1130-1140, the corresponding object can be removed from the registry 1125.
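Reusing the hypothetical Update and prune_registry sketches from above, the third-stage decision can be reproduced (object values are placeholders, and the queue contents match the second stage 1110 after compaction):

```python
registry      = {"LR1": object(), "LS1": object(), "LS2": object(), "LS3": object()}
desired_state = {"LS1": object(), "LR1": object()}
queues = [
    [Update("LR1", creates=True), Update("LS3", deletes=True), Update("LS1", creates=True)],
    [Update("LS3", deletes=True)],
    [Update("LS3", deletes=True)],
]

prune_registry(registry, desired_state, queues)
assert "LS2" not in registry   # no desired-state object, no queued references
assert "LS3" in registry       # delete updates for LS3 are still queued
```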


III. Electronic System


Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections.


In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.



FIG. 12 conceptually illustrates an electronic system 1200 with which some embodiments of the invention are implemented. The electronic system 1200 can be used to execute any of the control, virtualization, or operating system applications described above. The electronic system 1200 may be a computer (e.g., a desktop computer, personal computer, tablet computer, server computer, mainframe, blade computer, etc.), phone, PDA, or any other sort of electronic device. Such an electronic system includes various types of computer readable media and interfaces for various other types of computer readable media. Electronic system 1200 includes a bus 1205, processing unit(s) 1210, a system memory 1225, a read-only memory 1230, a permanent storage device 1235, input devices 1240, and output devices 1245.


The bus 1205 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the electronic system 1200. For instance, the bus 1205 communicatively connects the processing unit(s) 1210 with the read-only memory 1230, the system memory 1225, and the permanent storage device 1235.


From these various memory units, the processing unit(s) 1210 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments.


The read-only-memory (ROM) 1230 stores static data and instructions that are needed by the processing unit(s) 1210 and other modules of the electronic system. The permanent storage device 1235, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the electronic system 1200 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1235.


Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 1235, the system memory 1225 is a read-and-write memory device. However, unlike storage device 1235, the system memory is a volatile read-and-write memory, such as random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 1225, the permanent storage device 1235, and/or the read-only memory 1230. From these various memory units, the processing unit(s) 1210 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.


The bus 1205 also connects to the input and output devices 1240 and 1245. The input devices enable the user to communicate information and select commands to the electronic system. The input devices 1240 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 1245 display images generated by the electronic system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.


Finally, as shown in FIG. 12, bus 1205 also couples electronic system 1200 to a network 1265 through a network adapter (not shown). In this manner, the computer can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks, such as the Internet. Any or all components of electronic system 1200 may be used in conjunction with the invention.


Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.


While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.


As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral signals.


This specification refers throughout to computational and network environments that include virtual machines (VMs). However, virtual machines are merely one example of data compute nodes (DCNs) or data compute end nodes, also referred to as addressable nodes. DCNs may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules.


VMs, in some embodiments, operate with their own guest operating systems on a host using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. In some embodiments, the host operating system uses name spaces to isolate the containers from each other and therefore provides operating-system level segregation of the different groups of applications that operate within different containers. This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers. Such containers are more lightweight than VMs.


A hypervisor kernel network interface module, in some embodiments, is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/transmit threads. One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc.


It should be understood that while the specification refers to VMs, the examples given could be any type of DCNs, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules. In fact, the example networks could include combinations of different types of DCNs in some embodiments.


While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. In addition, a number of the figures (including FIGS. 3 and 6) conceptually illustrate processes. The specific operations of these processes may not be performed in the exact order shown and described. The specific operations may not be performed in one continuous series of operations, and different specific operations may be performed in different embodiments. Furthermore, the process could be implemented using several sub-processes, or as part of a larger macro process. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.

Claims
  • 1. For a network controller that manages a plurality of managed forwarding elements (MFEs) that implement a logical network comprising a plurality of logical entities, a method for efficiently distributing configuration data to the MFEs, the method comprising: receiving a logical network configuration update specifying a set of modifications to a particular logical entity of the logical network; when no entry exists for the particular logical entity in a first data structure that comprises an entry for each logical entity of the logical network, adding (i) an entry for the particular logical entity in the first data structure and (ii) an entry for the particular logical entity in a second data structure that comprises entries for each logical entity referred to by an update for at least one MFE; and adding MFE configuration updates to a plurality of different queues for different MFEs that require the logical network configuration update; and distributing the MFE configuration updates to at least a set of the different MFEs from the different queues, the different queues allowing the MFE configuration updates to be sent to the MFEs at different times to prevent any one MFE from causing a bottleneck for distributing the MFE configuration updates to other MFEs.
  • 2. The method of claim 1, wherein the logical entities comprise logical forwarding elements and logical ports of logical forwarding elements.
  • 3. The method of claim 1, wherein the logical network configuration update specifies creation of the particular logical entity.
  • 4. The method of claim 3, wherein the logical network configuration update is a first logical network configuration update and the MFE configuration updates are a first set of MFE configuration updates, the method further comprising receiving a second logical network configuration update specifying deletion of the particular logical entity.
  • 5. The method of claim 4, wherein: distributing the first-set MFE configuration updates to at least a set of the different MFEs from the different queues comprises, prior to receiving the second logical network configuration update, distributing the first-set MFE configuration update from a particular queue for a particular MFE to a local controller for the particular MFE; and the method further comprises, after receiving the second logical network configuration update: adding a second set of MFE configuration updates to the queues for the MFEs that require the second logical network configuration update; distributing the second-set MFE configuration update from the particular queue to the local controller for the particular MFE; and removing the entries for the particular logical entity in the first and second data structures once the second set of MFE updates have been distributed to the local controllers for the MFEs that require the second logical network configuration update.
  • 6. The method of claim 4, wherein a first-set update remains in a particular queue for a particular MFE when the second logical network configuration update is received, the method further comprising, after receiving the second update: adding a second set of MFE configuration updates to the queue for the MFEs that require the second logical network configuration update; combining the first-set MFE update and the second-set MFE update in the particular queue into a single MFE configuration update for the particular logical entity; and removing the combined MFE configuration update for the particular logical entity from the particular queue.
  • 7. The method of claim 1, wherein each MFE configuration update comprises (i) a reference to the entry for the particular logical entity in the second data structure and (ii) metadata indicating the specified set of modifications to the particular logical entity.
  • 8. For a network controller that manages a plurality of managed forwarding elements (MFEs) that implement a logical network comprising a plurality of logical entities, a method comprising: receiving a logical network configuration update specifying a set of modifications to a particular logical entity of the logical network; when no entry exists for the particular logical entity in a first data structure that represents a desired state of the logical network and comprises an entry for each logical entity of the logical network, adding (i) an entry for the particular logical entity in the first data structure and (ii) an entry for the particular logical entity in a second data structure that comprises entries for each logical entity referred to by an update for at least one MFE; and adding MFE configuration updates to queues for MFEs that require the logical network configuration update.
  • 9. The method of claim 8, wherein the first data structure representing the desired state of the logical network comprises, in addition to entries for each logical entity of the logical network, entries for sets of properties of at least a subset of the logical entities.
  • 10. The method of claim 8, wherein the network controller stores a separate queue of MFE configuration updates for each MFE of the plurality of MFEs.
  • 11. A non-transitory machine-readable medium storing a network controller application which when executed by at least one processing unit efficiently distributes configuration data to a plurality of managed forwarding elements (MFEs) that implement a logical network comprising a plurality of logical entities, the network controller application comprising sets of instructions for: receiving a logical network configuration update specifying a set of modifications to a particular logical entity of the logical network; when no entry exists for the particular logical entity in a first data structure that comprises an entry for each logical entity of the logical network, adding (i) an entry for the particular logical entity in the first data structure and (ii) an entry for the particular logical entity in a second data structure that comprises entries for each logical entity referred to by an update for at least one MFE; and adding MFE configuration updates to a plurality of different queues for different MFEs that require the logical network configuration update; and distributing the MFE configuration updates to at least a set of the different MFEs from the different queues, the different queues allowing the MFE configuration updates to be sent to the MFEs at different times to prevent any one MFE from causing a bottleneck for distributing the MFE configuration updates to other MFEs.
  • 12. The non-transitory machine-readable medium of claim 11, wherein the logical entities comprise logical forwarding elements and logical ports of logical forwarding elements.
  • 13. The non-transitory machine-readable medium of claim 11, wherein the logical network configuration update specifies creation of the particular logical entity.
  • 14. The non-transitory machine-readable medium of claim 13, wherein the logical network configuration update is a first logical network configuration update and the MFE configuration updates are a first set of MFE configuration updates, the program further comprising a set of instructions for receiving a second logical network configuration update specifying deletion of the particular logical entity.
  • 15. The non-transitory machine-readable medium of claim 14, wherein: the set of instructions for distributing the first-set MFE configuration updates to at least a set of the different MFEs from the different queues comprises a set of instructions for, prior to receiving the second logical network configuration update, distributing the first-set MFE configuration update from a particular queue for a particular MFE to a local controller for the particular MFE; the network controller application further comprises sets of instructions for, after receiving the second logical network configuration update: adding a second set of MFE configuration updates to the queues for the MFEs that require the second logical network configuration update; distributing the second-set MFE configuration update from the particular queue to the local controller for the particular MFE; and removing the entries for the particular logical entity in the first and second data structures once the second set of MFE updates have been distributed to the local controllers for the MFEs that require the second logical network configuration update.
  • 16. The non-transitory machine-readable medium of claim 14, wherein a first-set update remains in a particular queue for a particular MFE when the second logical network configuration update is received, wherein the network controller application further comprises sets of instructions for, after receiving the second update: adding a second set of MFE configuration updates to the queue for the MFEs that require the second logical network configuration update; combining the first-set MFE update and the second-set MFE update in the particular queue into a single MFE configuration update for the particular logical entity; and removing the combined MFE configuration update for the particular logical entity from the particular queue.
  • 17. The non-transitory machine-readable medium of claim 11, wherein each MFE configuration update comprises (i) a reference to the entry for the particular logical entity in the second data structure and (ii) metadata indicating the specified set of modifications to the particular logical entity.
  • 18. A non-transitory machine-readable medium storing a network controller application which when executed by at least one processing unit manages a plurality of managed forwarding elements (MFEs) that implement a logical network comprising a plurality of logical entities, the network controller application comprising sets of instructions for: receiving a logical network configuration update specifying a set of modifications to a particular logical entity of the logical network; when no entry exists for the particular logical entity in a first data structure that represents a desired state of the logical network and comprises an entry for each logical entity of the logical network, adding (i) an entry for the particular logical entity in the first data structure and (ii) an entry for the particular logical entity in a second data structure that comprises entries for each logical entity referred to by an update for at least one MFE; and adding MFE configuration updates to queues for MFEs that require the logical network configuration update.
  • 19. The non-transitory machine-readable medium of claim 18, wherein the first data structure representing the desired state of the logical network comprises, in addition to entries for each logical entity of the logical network, entries for sets of properties of at least a subset of the logical entities.
  • 20. The non-transitory machine-readable medium of claim 18, wherein the network controller stores a separate queue of MFE configuration updates for each MFE of the plurality of MFEs.
  • 21. The non-transitory machine-readable medium of claim 18, wherein each MFE configuration update comprises (i) a reference to the entry for the particular logical entity in the second data structure and (ii) metadata indicating the specified set of modifications to the particular logical entity.
  • 22. The method of claim 8, wherein each MFE configuration update comprises (i) a reference to the entry for the particular logical entity in the second data structure and (ii) metadata indicating the specified set of modifications to the particular logical entity.
CLAIM OF BENEFIT TO PRIOR APPLICATIONS

This application is a continuation of U.S. patent application Ser. No. 15/143,462, filed Apr. 29, 2016, now published as U.S. Patent Publication 2017/0318113. U.S. patent application Ser. No. 15/143,462, now published as U.S. Patent Publication 2017/0318113, is incorporated herein by reference.

Related Publications (1)
Number Date Country
20210258397 A1 Aug 2021 US
Continuations (1)
Number Date Country
Parent 15143462 Apr 2016 US
Child 17308922 US