With the recent increase in cloud-native applications, there is more demand than ever for fast deployment of on-demand networking for connecting machines that are deployed in software-defined datacenters (SDDCs). It is desirable to provide auto-deployed networking solutions, as many compute-cluster administrators do not have extensive knowledge of networking. However, for administrators who wish to adjust their system's networking, it is desirable to provide such administrators with the ability to configure and customize their network deployments.
Additionally, an administrator may wish to efficiently allocate resources and automate the application of certain policies among a number of related compute clusters while maintaining system visibility for analytics.
Some embodiments of the invention provide a novel network architecture for deploying guest clusters (GCs) including workload machines for a tenant (or other entity) within an availability zone (e.g., a datacenter providing a set of hardware resources). The novel network architecture includes a virtual private cloud (VPC) deployed in the availability zone (AZ) that includes a centralized routing element that provides access to a gateway routing element, or set of gateway routing elements, of the AZ. In some embodiments, the centralized routing element provides a set of services for packets traversing a boundary of the VPC. The services, in some embodiments, include load balancing, firewall, quality of service (QoS) and may be stateful or stateless. Guest clusters are deployed within the VPC and use the centralized routing element of the VPC to access the gateway routing element of the AZ. The deployed GCs, in some embodiments, include distributed routing elements that (1) provide access to the centralized routing element of the VPC for components of the GC and (2) execute on host computers along with workload machines of the GC.
The centralized routing element, in some embodiments, includes a service router (or routing element) of the VPC network and a distributed router (or routing element) of the VPC network. The service router provides routing operations and a set of stateful services while the distributed router provides stateless routing and, in some embodiments, stateless services. In some embodiments, the centralized routing element includes a set of centralized routing elements each executing the service router of the VPC network and the distributed router of the VPC network. The service router of the VPC, in some embodiments, executes in each of the centralized routing elements in the set of centralized routing elements, but does not execute in other machines of the VPC. The distributed router of the VPC executes in each host computer that hosts a machine of the VPC.
The centralized routing elements in the set of centralized routing elements, in some embodiments, are configured in an active-standby mode, wherein a particular centralized routing element receives all the traffic traversing the set of centralized routing elements. In other embodiments, the centralized routing elements in the set of centralized routing elements are configured in an active-active mode in which each centralized routing element receives some traffic traversing the set of centralized routing elements.
Resources allocated to the VPC, in some embodiments, are inherited by the guest clusters such that the guest clusters use the resources allocated to the VPC. In some embodiments, the resources include processing resources, storage resources, and network resources (e.g., IP addresses assigned to the VPC, bandwidth allocated to the centralized routing element of the VPC, etc.). The GCs, in some embodiments, also inherit (e.g., make use of) at least one service machine of the VPC that provides a service, or set of services, to the machines of the VPC and the machines of the GCs. In addition to inheriting the physical resources allocated to the VPC, in some embodiments, the guest clusters also inherit network policies and service definitions.
The GCs, in some embodiments, are implemented as Kubernetes clusters. In other embodiments, the GCs are non-Kubernetes clusters, while in yet other embodiments, the GCs include both Kubernetes and non-Kubernetes clusters. The VPC, in some embodiments, is a Kubernetes cluster, while in other embodiments, the VPC is a non-Kubernetes cluster that includes at least one of a virtual machine and a non-Kubernetes Pod.
The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, the Detailed Description, the Drawings and the Claims is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, the Detailed Description and the Drawings.
The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.
In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
Some embodiments of the invention provide a novel network architecture for deploying guest clusters (GCs) including workload machines for a tenant (or other entity) within an availability zone (e.g., a datacenter or set of datacenters providing a set of hardware resources). The novel network architecture includes a virtual private cloud (VPC) deployed in the availability zone (AZ) that includes a centralized VPC gateway router that provides access to an AZ gateway router, or set of gateway routing elements, of the AZ. In some embodiments, the centralized VPC gateway router provides a set of services for packets traversing a boundary of the VPC. The services, in some embodiments, include load balancing, firewall, quality of service (QoS) and may be stateful or stateless. Guest clusters are deployed within the VPC and use the centralized VPC gateway router of the VPC to access the AZ gateway router. The deployed GCs, in some embodiments, include distributed routing elements that (1) provide access to the centralized VPC router for components of the GC and (2) execute on host computers along with workload machines of the GC.
In some embodiments, automated processes are performed to define the virtual private cloud (VPC) connecting a set of machines to a logical network that segregates the set of machines from other machines in the AZ. In some embodiments, the set of machines includes virtual machines and container Pods, the VPC is defined with a supervisor cluster namespace, and the API requests are provided as YAML files. In some embodiments, the Pods (container Pods) are hosted in lightweight VMs that in turn execute on a host computer. In other embodiments, the host computers (e.g., worker/master nodes) are lightweight VMs deployed to host Pods of the cluster or other cluster components.
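As a concrete illustration of such a request, the sketch below shows what a YAML file that asks for a namespace representing a VPC might look like in some embodiments; the annotation keys and the CIDR value are illustrative assumptions rather than fields of any particular product's schema.

```yaml
# Hypothetical YAML API request asking the control system to create a
# supervisor-cluster namespace that represents a VPC. The annotation keys
# and values are illustrative assumptions, not an actual schema.
apiVersion: v1
kind: Namespace
metadata:
  name: tenant-a-vpc
  annotations:
    example.com/vpc: "true"                # marks the namespace as a VPC boundary (assumed key)
    example.com/ip-block: "10.244.0.0/16"  # IP range from which VPC subnets may be allocated (assumed key)
```

In some embodiments, submitting such a file (e.g., with a standard Kubernetes client) triggers the automated processes described below that deploy the network elements for the VPC.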
The automated processes in some embodiments use templates or preconfigured rules to identify and deploy network elements (e.g., forwarding elements) that implement the logical network without an administrator performing any action to direct the identification and deployment of the network elements after an API request is received. In some embodiments, the deployed network elements include a gateway router for the VPC (called VPC gateway router) to connect the VPC to a network of the AZ and/or to a network external to the datacenter set.
The VPC gateway router in some embodiments is implemented by one physical router. In other embodiments, the VPC gateway router is a logical gateway router that is implemented by more than one physical router. For instance, in some embodiments, the logical router is implemented with two physical routers in active/active or active/standby configurations. Also, in some embodiments, the logical router includes (1) a distributed router that is implemented by several router instances on host computers and edge appliances, and (2) a service router that is implemented by one or more service router instances executing on an edge appliance. In some embodiments, the service router is only implemented by the edge appliances and not on the other host computers of the VPC.
In some embodiments, the service router provides routing operations and a set of stateful services, while the distributed router provides stateless routing and, in some embodiments, stateless services. In some embodiments, the edge appliances implementing the service router are configured in active/active or active/standby configurations. Active/active configurations, in some embodiments, include configurations in which the edge appliances are in an active/standby configuration for each of the multiple GCs within the VPC, but each physical router executes a service router instance that is the active service router for at least one GC of the multiple GCs while serving as a standby for a set of other GCs in the VPC. Because the service router is only implemented on a set of edge appliances and, in some embodiments, only a single service router instance is active for a given GC, the VPC gateway router is sometimes referred to as a centralized VPC gateway router.
The VPC gateway router is configured to communicate with a datacenter gateway router to connect to external networks (e.g., other VPCs, or networks accessible over the Internet). In some embodiments, the VPC gateway router is configured to perform a source network address translation (SNAT) operation to translate internal network addresses used within the VPC to a set of one or more external source network addresses. In some embodiments, the VPC gateway router does not perform SNAT operations for traffic exchanged between the VPC and another VPC that is deployed in the AZ, while in other embodiments it performs such SNAT operations.
The VPC gateway is configured to perform load balancing operations, or to work with one or more load balancers to perform load balancing operations, on ingress and/or egress traffic entering and/or exiting the VPC. The load balancing operations in some embodiments are Layer 4 (L4) and/or Layer 7 (L7) load balancing operations. In some embodiments, at least a subset of the deployed machines is deployed through Kubernetes, and the L4/L7 load balancing operations implement the load balancing and ingress services of Kubernetes.
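For reference, the Kubernetes load balancing service that such L4 operations implement is expressed by a Service of type LoadBalancer; the minimal sketch below uses illustrative names and ports and is not drawn from any particular deployment.

```yaml
# Minimal Kubernetes Service of type LoadBalancer (an L4 load balancing
# request). In some embodiments, the VPC gateway and/or its associated load
# balancers realize this service for traffic entering the VPC.
# The names and port numbers are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: web-frontend
  namespace: tenant-a-vpc
spec:
  type: LoadBalancer
  selector:
    app: web            # backend Pods carrying this label receive the traffic
  ports:
    - protocol: TCP
      port: 80          # externally exposed L4 port
      targetPort: 8080  # container port on the backend Pods
```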
To deploy the network elements, the network control system of some embodiments processes one or more Custom Resource Definitions (CRDs) that define attributes of custom-specified network resources that are referred to by the received API requests. When these API requests are Kubernetes API requests, the CRDs define extensions to the Kubernetes networking requirements. Some embodiments use the following CRDs: Virtual Network Interface (VIF) CRDs, Virtual Network CRDs, Endpoint Group CRDs, security CRDs, Virtual Service Object (VSO) CRDs, and Load Balancer CRDs.
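By way of example, a custom resource type is registered with the Kubernetes API server through a CustomResourceDefinition object; the sketch below registers a hypothetical VirtualNetwork kind. The API group, field names, and schema are assumptions chosen for illustration and are not the actual CRDs referenced above.

```yaml
# Sketch of a CustomResourceDefinition that registers a hypothetical
# "VirtualNetwork" resource kind with the Kubernetes API server. The group,
# field names, and schema are illustrative assumptions.
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: virtualnetworks.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: VirtualNetwork
    plural: virtualnetworks
    singular: virtualnetwork
    shortNames: [vnet]
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                networkType:   # network type value (assumed field)
                  type: string
                ipBlock:       # address range for the sub-network (assumed field)
                  type: string
```

Once such a definition is registered, API requests can create instances of the new kind just as they create native Kubernetes resources.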
A VIF CRD in some embodiments is used to define a virtual interface to connect a non-Kubernetes container Pod or VM to software forwarding elements (e.g., software switches) executing on host computers on which the non-Kubernetes Pods and VMs execute. A Virtual Network CRD in some embodiments is used to define the attributes of a logical sub-network that is to connect a subset of the deployed machines. An Endpoint Group CRD is used to define attributes for grouping heterogeneous or homogeneous sets of machines (i.e., machines of different or the same types). The Endpoint Group CRD provides a simple mechanism for defining a group of machines for accessing a service or compute operation, and/or for providing a service or compute operation.
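A hypothetical instance of an endpoint-group custom resource of the kind described above might look like the following; the API group, kind, and selector fields are assumptions used only to illustrate grouping a heterogeneous set of machines.

```yaml
# Hypothetical endpoint-group custom resource grouping the machines (VMs and
# Pods) of a "web" tier. The group, kind, and field names are illustrative
# assumptions, not a published schema.
apiVersion: example.com/v1alpha1
kind: EndpointGroup
metadata:
  name: web-tier
  namespace: tenant-a-vpc
spec:
  members:
    - selector:
        matchLabels:
          app: web          # machines labeled app=web join the group
    - ipBlocks:
        - 10.244.5.0/28     # statically addressed machines can be added by address range
```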
Security CRDs are used to specify security policies for the VPC. For instance, some embodiments use a Security Policy CRD to define security policies for traffic between VPC network endpoints, which can be defined with Endpoint Group CRDs. Another security CRD in some embodiments is an Admin Policy CRD, which can be used to define security policies for north/south traffic between the VPC and an external network (e.g., from another VPC, from an external IP block, or from outside of the datacenter set in which the VPC is deployed).
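Continuing the illustration, an API request that refers to such a Security Policy CRD might resemble the sketch below, which allows traffic from a "web-tier" endpoint group to a "db-tier" endpoint group on one port; the kind, group, and field names are assumptions made for illustration.

```yaml
# Hypothetical security-policy custom resource governing east-west traffic
# between two endpoint groups. All field names are illustrative assumptions.
apiVersion: example.com/v1alpha1
kind: SecurityPolicy
metadata:
  name: allow-web-to-db
  namespace: tenant-a-vpc
spec:
  appliedTo:
    - endpointGroup: db-tier    # the group whose traffic the policy filters
  rules:
    - action: allow
      direction: in
      sources:
        - endpointGroup: web-tier
      ports:
        - protocol: TCP
          port: 5432            # only this port is allowed from web-tier to db-tier
```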
A VSO CRD is used to expose a service (e.g., a middlebox service or an application tier, such as a Web server, AppServer, or database server) provided inside of the VPC to machines outside of the VPC or to machines inside of the VPC. In some embodiments, an API that refers to a VSO CRD maps a set of one or more L4 ports and a protocol to an endpoint group of machines that provides the service. Some embodiments use a Load Balancer CRD to define the configuration for a load balancer service. In some embodiments, the API that refers to the VSO CRD also uses the Load Balancer CRD to specify a load balancer service to use for distributing the traffic load among the endpoint group of machines.
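As a final illustration of these constructs, an API request that refers to a VSO CRD might look like the following sketch, which maps TCP port 443 to the "web-tier" endpoint group and names a load balancer defined through the Load Balancer CRD; the kind, group, and field names are assumptions for illustration.

```yaml
# Hypothetical virtual service object (VSO) custom resource that exposes an
# endpoint group on an L4 port through a load balancer. Field names are
# illustrative assumptions.
apiVersion: example.com/v1alpha1
kind: VirtualServiceObject
metadata:
  name: web-vso
  namespace: tenant-a-vpc
spec:
  endpointGroup: web-tier        # machines that provide the exposed service
  ports:
    - protocol: TCP
      port: 443                  # L4 port mapped to the endpoint group
  loadBalancer: vpc-default-lb   # load balancer service defined via a Load Balancer CRD
  scope: external                # assumed values: "external" (outside the VPC) or "internal"
```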
Several more detailed examples of some embodiments will now be described. In these examples, several of the deployed logical networks are Kubernetes-based logical networks that define virtual private clouds (VPC) for corporate entities in one or more datacenters. In some embodiments, the VPC is a “supervisor” Kubernetes cluster with a namespace that provides the tenancy boundary for the entity. These embodiments use CRDs to define additional networking constructs and policies that complement the Kubernetes native resources.
In some embodiments, the APIs define a cluster of nodes (e.g., a Kubernetes worker node cluster) that includes a set of components that represent a control plane for the cluster and a set of (worker) nodes. In some embodiments, the nodes are host computers that host components of the Kubernetes clusters. The host computers of the cluster, in some embodiments, are physical machines, virtual machines, or a combination of both. The host computers (i.e., nodes) execute a set of Pods that, in some embodiments, include a set of containers. In some embodiments, a Kubernetes worker node executes an agent that ensures that containers are running within Pods (e.g., a kubelet), a container runtime that is responsible for running containers, and a network proxy (e.g., a kube-proxy). A cluster, in some embodiments, is partitioned into a set of namespaces into which different Pods or containers are deployed. A namespace is further partitioned into separate clusters, in some embodiments, as will be described below.
One of ordinary skill will realize that other embodiments define other types of networks for other types of entities, such as other business entities, non-profit organizations, educational entities, etc. In some of these other embodiments, neither Kubernetes nor Kubernetes-based Pods are used. For instance, some embodiments are used to deploy networks for only VMs and/or non-Kubernetes containers/Pods. Additional details of VPC and GC deployment using CRDs can be found in U.S. patent application Ser. No. 16/897,652 filed on Jun. 10, 2020, now published as U.S. Patent Publication 2021/0314239, which is hereby incorporated by reference.
As used in this document, data messages refer to a collection of bits in a particular format sent across a network. One of ordinary skill in the art will recognize that the term data message is used in this document to refer to various formatted collections of bits that are sent across a network. The formatting of these bits can be specified by standardized protocols or non-standardized protocols. Examples of data messages following standardized protocols include Ethernet frames, IP packets, TCP segments, UDP datagrams, etc. Also, as used in this document, references to L2, L3, L4, and L7 layers (or layer 2, layer 3, layer 4, and layer 7) are references respectively to the second data link layer, the third network layer, the fourth transport layer, and the seventh application layer of the OSI (Open System Interconnection) layer model.
Some embodiments configure the logical network for the VPC to connect the deployed set of machines to each other. For instance, in some embodiments, the logical network includes one or more logical forwarding elements, such as logical switches, routers, gateways, etc. In some embodiments, a logical forwarding element (LFE) is defined by configuring several physical forwarding elements (PFEs), some or all of which execute on host computers along with the deployed machines (e.g., VMs and Pods). The PFEs, in some embodiments, are configured to implement two or more LFEs to connect two or more different subsets of deployed machines.
In some embodiments, two or more sub-networks are configured for the logical networks. In some embodiments, each sub-network has one or more segments (with each segment implemented by a logical switch), connects a different subset of deployed machines, and provides a set of network elements that satisfy a unique set of connectivity requirements for that subset of machines. For instance, in some embodiments, a first sub-network (e.g., a first logical switch) connects the Kubernetes Pods, while a second sub-network (e.g., a second logical switch) connects VMs and/or non-Kubernetes Pods. Another example is having one sub-network for machines (e.g., VMs, Pods, etc.) that need high-bandwidth, and another sub-network for machines that can tolerate less bandwidth.
To deploy some or all of the unique sub-networks, some embodiments use CRDs to define the attributes of the sub-networks, so that these sub-networks can be referred to by the API requests. These CRDs are referred to, in some embodiments, as virtual network CRDs. An API that refers to a virtual-network CRD in some embodiments includes a network type value that can be used to define different types of virtual networks.
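Given a virtual-network CRD such as the one sketched earlier, an API request that refers to it could be a custom resource instance like the following; the network-type values shown are assumptions used to illustrate how different types of virtual networks might be selected.

```yaml
# Hypothetical VirtualNetwork custom resource selecting a network type for a
# sub-network of the VPC's logical network. The values are illustrative
# assumptions.
apiVersion: example.com/v1alpha1
kind: VirtualNetwork
metadata:
  name: high-bandwidth-segment
  namespace: tenant-a-vpc
spec:
  networkType: routed       # assumed types: "routed" (directly routable) or "nat" (behind the VPC gateway)
  ipBlock: 10.244.8.0/24    # addresses allocated to this sub-network
```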
As shown, the set of guest clusters 105 includes several Kubernetes nodes (e.g., host computers that are part of the guest cluster) on which Pods (not shown) for the cluster execute. The set of nodes includes a set of master nodes 120 and a set of worker nodes 124. In some embodiments, the set of master nodes 120 includes a Kubernetes API server executing on each master node 120 to deploy Pods in the guest cluster. In this example, each guest cluster 105 includes a logical network (i.e., GC node segment 126) for connecting the Kubernetes nodes. In some embodiments, the logical network includes multiple network segments, each defined by a logical switch. The logical network of each guest cluster 105 connects to the logical VPC gateway router 140 that connects to the logical (or physical) gateway router 150 of the availability zone.
In some embodiments, the logical VPC gateway router 140 of the VPC 110 is similar to the gateway router 1282 described below.
The networks and machines (e.g., VMs, Pods, etc.) of the GC, in some embodiments, use NSX-T native networking. In such embodiments, Pods are placed on NSX-T segments in the GC network. NSX-T container network interfaces (CNIs) are used, in such embodiments, to connect the Pods to the NSX-T native network. In the NSX-T native network, the machines (e.g., Pods and VMs) of the GCs can reach each other through NSX-T distributed switching and routing and GC machines can reach the machines of the VPC network through the NSX-T distributed switching and routing and, in some embodiments, through the centralized routing element of the VPC. GC subnets, in some embodiments, are not exposed outside the VPC. In some embodiments, all traffic forwarding, networking, and security services are implemented by an NSX-T dataplane in hypervisors of host computers hosting machines of the VPC and GC. The Kubernetes network policy, in some embodiments, is implemented by the NSX-T distributed firewall.
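The Kubernetes network policy referred to here is the standard NetworkPolicy resource; a minimal example follows, and in the embodiments described above its rules would be realized by the NSX-T distributed firewall in the hypervisors. The labels and namespace are illustrative.

```yaml
# Standard Kubernetes NetworkPolicy. In the embodiments described above, the
# distributed firewall in the hypervisors enforces these rules for the GC
# Pods. Labels and namespace names are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-from-frontend
  namespace: guest-cluster-1
spec:
  podSelector:
    matchLabels:
      app: web                # policy applies to the guest cluster's web Pods
  policyTypes: [Ingress]
  ingress:
    - from:
        - podSelector:
            matchLabels:
              tier: frontend  # only frontend Pods may reach the web Pods
      ports:
        - protocol: TCP
          port: 8080
```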
In some embodiments, non-NSX-T CNIs are used to connect Pods over a virtual network implemented inside a set of worker nodes on which the service Pods (e.g., servers A1-n 328) execute. In such embodiments, the virtual network (e.g., non-native Pod segment 332) is not known to NSX-T.
The service iptables, or any other configured forwarding/load balancing component, is represented by load balancer 336. Load balancer 336, in some embodiments, is effectively a distributed load balancer that applies the same rules at each instance of the load balancer 336. In other embodiments, different load balancers 336 executing in different worker nodes 224 are programmed with different policies or rules for load balancing. A set of packet flows 370 destined for the load-balanced Pods (e.g., servers A) are processed by a load balancer (e.g., a service node) 245 of the VPC, which performs a first load balancing operation to produce subsets of the packet flows 371 that are directed to the individual worker nodes (e.g., using the IP address of the worker node on the GC node segment 226). Once the packets arrive at the worker nodes, the load balancer 336 (e.g., service iptables) performs a second load balancing operation to distribute the subset of packets received from the load balancer 245 among the individual service Pods 328 (e.g., as groups of packets 372) based on their network addresses on the non-native Pod segment 332 that are known to the worker nodes. A load balancing operation performed by one load balancer 336 is shown for clarity; however, one of ordinary skill in the art will appreciate that each load balancer performs a similar load balancing operation.
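The per-node service rules described above (represented by load balancer 336) are the rules that Kubernetes programs for a standard Service object; the sketch below shows such a Service, with the assumption that the first-tier load balancer (e.g., service node 245) targets the worker nodes' addresses and the node-local rules then spread the traffic across the backing Pods. The names and port numbers are illustrative.

```yaml
# Standard Kubernetes Service backing the service Pods (servers A1-n). The
# node-local service rules (e.g., iptables programmed by kube-proxy) that
# perform the second load balancing tier are derived from this object; the
# first tier can target each worker node's address/nodePort.
# Names and port numbers are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: server-a
  namespace: guest-cluster-1
spec:
  type: NodePort
  selector:
    app: server-a        # the service Pods among which traffic is distributed
  ports:
    - protocol: TCP
      port: 80           # cluster-internal service port
      targetPort: 9376   # container port on each service Pod
      nodePort: 30080    # per-node port that the first-tier load balancer can target
```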
In some embodiments, a set of service nodes (e.g., service nodes 145 (e.g., VMs, appliances, containers, etc.)) are a resource shared by the VPC and the GCs within the VPC. In some embodiments, the service nodes are instances of virtual service objects (VSOs) that provide a set of services to the machines of the VPC and are inherited by GCs deployed in the VPC such that the machines of the GCs also receive the set of services from the service nodes 145. In some embodiments, the VSOs are associated with endpoint groups for which they provide a service. Different service nodes are deployed or assigned, in some embodiments, to provide a service or set of services for a particular GC within the VPC. Details of deploying a VSO can be found in U.S. patent application Ser. No. 16/897,652 filed on Jun. 10, 2020. In addition to inheriting the physical resources allocated to the VPC, in some embodiments, the guest clusters also inherit network policies and service definitions.
The VPC 110 also includes a cluster of master nodes 142, each of which is similar to the Kubernetes master node 1135 described below.
The centralized service routing component, in some embodiments, provides stateful services (e.g., firewall, load balancing, quality of service (QoS), etc.) and is implemented in an active/standby or active/active configuration that ensures that each data message belonging to a particular data message flow is always processed by a same centralized service routing component (service router) instance that stores the state for the particular data message flow. In some embodiments, the centralized service routing component connects to service machines (e.g., service nodes 145) that provide a stateful service and directs data messages that require the service (e.g., based on network policies specified for the VPC) to the service machines. The distributed routing component of the VPC, in some embodiments, performs a set of stateless routing operations. The set of stateless routing operations performed by the distributed routing component, in some embodiments, includes a distributed firewall operation that applies stateless firewall rules to data messages processed by the distributed routing element. The distributed routing element, in some embodiments, executes (is implemented) on each host computer that hosts a machine of the VPC namespace including any guest clusters within the VPC namespace. The firewall rules in some embodiments are defined by a security CRD as described above and in more detail in U.S. patent application Ser. No. 16/897,652.
After deploying the namespace, the process 400 receives (at 410) an instruction to deploy a guest cluster (e.g., guest cluster 105) within the VPC namespace (e.g., supervisor namespace 110). The instruction, in some embodiments, is received at a network manager cluster (e.g., SDN manager 962) from a network control system such as the one described below.
After the instruction to deploy the guest cluster is received (at 410) the process 400 selects (at 415) resources of the VPC namespace to assign to the guest cluster. The resources assigned to the guest cluster, in some embodiments, include all or some of IP addresses, service machines, physical compute resources, network (e.g., bandwidth) resources, VPC gateway routing elements, etc. For example, in some embodiments, a particular centralized routing element is selected to be the active centralized routing element for a particular deployed guest cluster. Additionally, or alternatively, a particular set of load balancers or other service machines is selected, in some embodiments, to provide load balancing or other services to a particular deployed guest cluster. By selecting different centralized routing elements (e.g., VPC gateway routers) and sets of service machines for each guest cluster, the load from each guest cluster can be distributed among existing instances of the centralized routing elements and service machines without having to deploy a new centralized routing element and set of service machines each time a guest cluster is deployed.
In some embodiments, gateway routers 840 configured as active/active gateway routers exchange any of (1) state information related to stateful services provided at the gateway routers 840 or (2) information allowing a particular gateway router (e.g., 840b) that receives a packet to identify the gateway router that maintains the state information needed to process the packet. For example, in some embodiments, a consistent hash of header values that are constant for the life of a packet flow is used to identify a (backup) gateway router that stores state information. In other embodiments, a same service node, called by each gateway router 840 for a particular guest cluster to provide a stateful service, maintains the state information, so the gateway routers do not have to account for the location of the state information. In some embodiments, each VPC gateway router 840 (or set of gateway routers) connects to a same set of service machines, while in other embodiments, each VPC gateway router connects to a different set of service machines. The set of service machines for each VPC gateway router, in some embodiments, is based on the services required for the guest clusters for which the VPC gateway router has been selected.
In addition to selecting (at 415) resources of the VPC namespace to assign to the guest cluster, the process 400 updates (at 420) policies (e.g., security and network policies) of the VPC namespace based on the addition of the guest cluster. In some embodiments, updating the policies includes adding policies defined for the guest cluster to existing policies of the VPC namespace. For example, based on a set of service pods implemented in the guest cluster and assigned a virtual IP (VIP) address (e.g., by selecting an available VIP of the VPC namespace in operation 415), a network policy requiring load balancing for data messages destined to the VIP associated with the set of service pods is added to the set of existing network policies. In addition to updating a network policy, a firewall based on a security policy may need to be updated based on the addition of the guest cluster. For example, a firewall policy that generates firewall rules for each machine in the VPC based on a source and/or destination address of a data message updates the set of firewall rules with firewall rules for the addresses of the machines in the added guest cluster. If a firewall rule specifies a group of machines, some embodiments add the machines of the guest cluster to the group definition (e.g., either a machine identifier or a VIF of the machine at which the rule should be applied). For north-south firewall rules, new rules are added, in some embodiments, based on an external IP address used by the guest cluster (e.g., based on a source network address translation operation at the edge of the guest cluster or at the centralized routing element of the VPC).
Finally, the components of the VPC and the guest cluster(s) within the VPC namespace are configured (at 425) to apply the updated policies. In some embodiments, configuring the VPC components includes updating a rule set or group definition as described above. Configuring the guest clusters, in some embodiments, includes identifying the host computers hosting machines of the guest cluster and updating an existing distributed routing component instance to apply the updated rules and implement the network segments of the added guest cluster. Alternatively, in some embodiments, or for host computers that previously did not host components of the VPC, configuring components of the VPC to apply the updated policies includes configuring a forwarding element of a host computer on which a machine of the guest cluster executes to implement the network segments to which the guest cluster machines connect as well as the distributed routing component which applies a set of updated distributed firewall rules. Additional details of deploying VPC namespaces and guest clusters are discussed below.
In some embodiments, the supervisor cluster (VPC) resources (e.g., network and Kubernetes services, Pods, VMs, worker nodes, etc.) are accessible by the guest cluster machines (e.g., VMs and Pods). This is because the IP addresses of the VPC machines are reachable from the machines of the guest clusters. In some embodiments, the guest cluster network is opaque to the supervisor cluster (VPC) such that the VPC machines cannot address the machines in the GC networks.
VPC 910 also includes multiple network segments (e.g., logical switches) 947 and 946 that may be scaled out (e.g., by an auto-scaling operation performed by an NCP of a master node 942) based on the availability of addresses in the network segment. In some embodiments, multiple different segments are deployed to logically separate machines (Pods, VMs, etc.) with different functions or that belong to different entities of a tenant for which the VPC 910 is deployed. Each network segment of the VPC 910 is logically connected to logical gateway router 940. The master node 942, in some embodiments, is connected to a management network to communicate with the compute manager/controller 966 to deploy machines and to communicate with the SDN manager 962 to identify machines in the VPC 910 (or guest cluster 905) network that need to be connected to the SDN network (e.g., an NSX-T network). The SDN manager 962 can communicate with the SDN controller 964 as described in more detail below.
Each guest cluster 905 includes at least one network segment that connects to the logical gateway router 940. As with the VPC network segments 946 and 947, the network segments of the guest cluster may be scaled out (e.g., by an auto-scaling operation performed by an NCP of a master node 942) based on the availability of addresses in the network segment. In some embodiments, multiple different segments are deployed to logically separate machines (Pods, VMs, etc.) with different functions or that belong to different entities of a tenant for which the guest cluster 905 is deployed.
As shown, the control system 1100 includes an API processing cluster 1105, a software defined network (SDN) manager cluster 1110, an SDN controller cluster 1115, and compute managers and controllers 1117. The API processing cluster 1105 includes two or more API processing nodes 1135, with each node comprising an API processing server 1140 and a network controller plugin (NCP) 1145. The API processing server receives intent-based API calls and parses these calls. In some embodiments, the received API calls are in a declarative, hierarchical Kubernetes format, and may contain multiple different requests.
The API processing server 1140 parses each received intent-based API request into one or more individual requests. When the requests relate to the deployment of machines, the API server provides these requests directly to compute managers and controllers 1117, or indirectly provides these requests to the compute managers and controllers 1117 through an agent running on the Kubernetes master node 1135. The compute managers and controllers 1117 then deploy VMs and/or Pods on host computers in the availability zone.
The API calls can also include requests that require network elements to be deployed. In some embodiments, these requests explicitly identify the network elements to deploy, while in other embodiments the requests can also implicitly identify these network elements by requesting the deployment of compute constructs (e.g., compute clusters, containers, etc.) for which network elements have to be defined by default. As further described below, the control system 1100 uses the NCP 1145 to identify the network elements that need to be deployed, and to direct the deployment of these network elements.
In some embodiments, the API calls refer to extended resources that are not defined per se by Kubernetes. For these references, the API processing server 1140 uses one or more CRDs 1120 to interpret the references in the API calls to the extended resources. As mentioned above, the CRDs in some embodiments include the VIF, Virtual Network, Endpoint Group, Security Policy, Admin Policy, and Load Balancer and VSO CRDs. In some embodiments, the CRDs are provided to the API processing server in one stream with the API calls.
NCP 1145 is the interface between the API server 1140 and the SDN manager cluster 1110 that manages the network elements that serve as the forwarding elements (e.g., switches, routers, bridges, etc.) and service elements (e.g., firewalls, load balancers, etc.) in an availability zone. The SDN manager cluster 1110 directs the SDN controller cluster 1115 to configure the network elements to implement the desired forwarding elements and/or service elements (e.g., logical forwarding elements and logical service elements) of one or more logical networks. As further described below, the SDN controller cluster interacts with local controllers on host computers and edge gateways to configure the network elements in some embodiments.
In some embodiments, NCP 1145 registers for event notifications with the API server 1140, e.g., sets up a long-pull session with the API server to receive all CRUD (Create, Read, Update and Delete) events for various CRDs that are defined for networking. In some embodiments, the API server 1140 is a Kubernetes master node, and the NCP 1145 runs in this node as a Pod. NCP 1145 in some embodiments collects realization data from the SDN resources for the CRDs and provides this realization data as it relates to the CRD status.
In some embodiments, NCP 1145 processes the parsed API requests relating to VIFs, virtual networks, load balancers, endpoint groups, security policies, and VSOs, to direct the SDN manager cluster 1110 to implement (1) the VIFs needed to connect VMs and Pods to forwarding elements on host computers, (2) virtual networks to implement different segments of a logical network of the VPC (or of GCs within the VPC), (3) load balancers to distribute the traffic load to endpoint machines, (4) firewalls to implement security and admin policies, and (5) exposed ports to access services provided by a set of machines in the VPC to machines outside and inside of the VPC.
The API server provides the CRDs that have been defined for these extended network constructs to the NCP for it to process the APIs that refer to the corresponding network constructs. The API server also provides configuration data from the configuration storage 1125 to the NCP 1145. The configuration data in some embodiments include parameters that adjust the pre-defined template rules that the NCP follows to perform its automated processes. The NCP performs these automated processes to execute the received API requests in order to direct the SDN manager cluster 1110 to deploy the network elements for the VPC. For a received API, the control system 1100 performs one or more automated processes to identify and deploy one or more network elements that are used to implement the logical network for a VPC. The control system performs these automated processes without an administrator performing any action to direct the identification and deployment of the network elements after an API request is received.
The SDN managers 1110 and controllers 1115 can be any SDN managers and controllers available today. In some embodiments, these managers and controllers are the NSX-T managers and controllers licensed by VMware Inc. In such embodiments, NCP 1145 detects network events by processing the data supplied by its corresponding API server 1140, and uses NSX-T APIs to direct the NSX-T manager 1110 to deploy and/or modify NSX-T network constructs needed to implement the network state expressed by the API calls. The communication between the NCP and NSX-T manager 1110 is asynchronous communication, in which the NCP provides the desired state to the NSX-T managers, which then relay the desired state to the NSX-T controllers to compute and disseminate the state asynchronously to the host computers, forwarding elements, and service nodes in the availability zone (i.e., to the SDDC set controlled by the controllers 1115).
After receiving the APIs from the NCPs 1145, the SDN managers 1110 in some embodiments direct the SDN controllers 1115 to configure the network elements to implement the network state expressed by the API calls. In some embodiments, the SDN controllers serve as the central control plane (CCP) of the control system 1100.
Based on the received configuration data, the LCP agents 1220 on the host computers 1205 configure one or more software switches 1250 and software routers 1255 to implement distributed logical switches, routers, bridges and/or service nodes (e.g., service VMs or hypervisor service engines) of one or more logical networks with the corresponding switches and routers on other host computers 1205, edge appliances 1210, and TOR switches 1215. On the edge appliances, the LCP agents 1225 configure packet processing stages 1270 of these appliances to implement the logical switches, routers, bridges and/or service nodes of one or more logical networks along with the corresponding switches and routers on other host computers 1205, edge appliances 1210, and TOR switches 1215.
For the TORs 1215, the TOR agents 1230 configure one or more configuration tables 1275 of TOR switches 1215 through an OVSdb server 1240. The data in the configuration tables then is used to configure the hardware ASIC packet-processing pipelines 1280 to perform the desired forwarding operations to implement the desired logical switching, routing, bridging and service operations. U.S. Pat. Nos. 10,554,484, 10,250,553, 9,847,938, and 9,178,833 describe CCPs, LCPs and TOR agents in more detail, and are incorporated herein by reference.
After the host computers 1205 are configured along with the edge appliances 1210 and/or TOR switches 1215, they can implement one or more logical networks, with each logical network segregating the machines and network traffic of the entity for which it is deployed from the machines and network traffic of other entities in the same availability zone.
As shown, the logical network 1295 includes multiple logical switches 1284 with each logical switch connecting different sets of machines and serving as a different network segment. In some embodiments, the different logical switches belong to different guest clusters. Each logical switch has a port 1252 that connects with (i.e., is associated with) a virtual interface 1265 of a machine 1260. The machines 1260 in some embodiments include VMs and Pods, with each Pod having one or more containers.
The logical network 1295 also includes a logical router 1282 that connects the different network segments defined by the different logical switches 1284. In some embodiments, the logical router 1282 serves as a gateway for the deployed VPC.
In some embodiments, the centralized and distributed routing components connect through a logical switch 1294 defined on the host computers 1205 and the edge appliances 1210. Also, in some embodiments, the logical router is implemented by a pair of logical nodes 1299, with each node having centralized and distributed components. The pair of nodes can be configured to perform in active/active or active/standby modes in some embodiments. U.S. Pat. No. 9,787,605 describes the gateway implementation of some embodiments in more detail and is incorporated herein by reference.
As shown, the process 1300 initially allocates (at 1305) an IP subnet for the VPC. In some embodiments, the VPC is part of a supervisor cluster (or namespace) that is a single routing domain with a corresponding IP CIDR (Classless Inter-Domain Routing) that specifies a range of IP addresses internal to the availability zone. The allocated IP subnet in some embodiments is a subnet from this IP CIDR. In conjunction with the allocated IP addresses, the process in some embodiments allocates MAC addresses for virtual interfaces of the VPC. In some embodiments, the VPC is a virtual hybrid cloud (VHC) implemented in a single namespace in the supervisor cluster.
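Purely to illustrate the bookkeeping performed at this stage, the allocation might be recorded in a custom resource such as the following sketch; the kind and field names are assumptions and are not part of any published API.

```yaml
# Hypothetical record of the IP subnet allocated to a VPC out of the
# supervisor cluster's IP CIDR. The kind and field names are illustrative
# assumptions.
apiVersion: example.com/v1alpha1
kind: IPSubnetAllocation
metadata:
  name: tenant-a-vpc-subnet
spec:
  clusterCIDR: 10.0.0.0/16        # IP CIDR of the supervisor cluster (single routing domain)
  allocatedSubnet: 10.0.32.0/20   # subnet carved out of the CIDR for this VPC
```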
Next, at 1310, the process defines a gateway router for the VPC, and associates this gateway router with one or more of the allocated internal IP addresses. These associated addresses are addresses used by VPC switches and routers to reach the gateway.
In some embodiments, the VPC gateway router 1282 is configured to connect the VPC with one or more gateway routers 1405 of the availability zone (i.e., of the SDDC set that contains the VPC), in order to connect to a network external to the availability zone. Also, in some embodiments, the VPC gateway router 1282 is configured to communicate with a datacenter gateway router 1405 to connect the VPC gateway 1282 to another VPC gateway of another VPC in order to connect the two VPCs to each other. In some embodiments, the VPC gateway router 1282 is configured to forward packets directly to the gateway routers (not shown) of the other VPCs. In some embodiments, the VPC gateway router 1282 is traversed for cross-namespace traffic and firewall rules (including admin policies and Kubernetes network policies on the namespace) are applied to the cross-namespace traffic. However, since Kubernetes expects a single routing domain for the whole cluster (supervisor namespace, or VPC), SNAT will not be applied to cross-namespace traffic, but only to the traffic to the external network.
At 1315, the process defines a segment of a logical network that it defines for the VPC and allocates a range of IP addresses to this segment. In some embodiments, this allocated range is a contiguous range, while in other embodiments it is not (i.e., the allocated IP addresses in these embodiments are not necessarily sequential). In some embodiments, the defined logical network segment includes a logical switch that is defined to connect a particular set of machines (e.g., VMs and/or Pods).
As mentioned above, the VPC logical network in some embodiments includes one or more logical forwarding elements, such as logical switches, routers, gateways, etc. In some embodiments, the SDN controller 1115 implements the logical network by configuring several physical forwarding elements (such as software and hardware switches, routers, bridges, etc.) on host computers, edge appliances, and TOR switches to implement one or more logical forwarding elements (LFEs).
As further described below, the control system in some embodiments configures the PFEs to implement two or more LFEs to connect two or more different subsets of deployed machines that are in two or more sub-networks of the logical networks. In some embodiments, each sub-network can have one or more segments (with each segment implemented by a logical switch), connects a different subset of deployed machines, and provides a set of network elements that satisfy a unique set of connectivity requirements for that subset of machines. For instance, in some embodiments, a first sub-network (e.g., a first logical switch) connects the Kubernetes Pods, while a second sub-network (e.g., a second logical switch) connects VMs. In other embodiments, one sub-network is for VMs needing high-bandwidth, while another sub-network is for regular VMs. Additional examples are provided in U.S. patent application Ser. No. 16/897,652 filed on Jun. 10, 2020.
Some sub-networks of a VPC's logical network in some embodiments can have their own sub-network gateway router. If the sub-network for the segment defined at 1315 has such a sub-network router, the process 1300 defines (at 1320) the sub-network router for the logical network segment. As further described below, the sub-network routers in some embodiments can be configured to forward packets to the VPC gateway router (e.g., router 1282) or the availability-zone router (e.g., router 1405).
At 1325, the process 1300 configures the VPC gateway to connect to the availability-zone gateway and to perform source network address translation (SNAT) operations. For instance, in some embodiments, the process configures the VPC gateway 1282 with forwarding rules for the gateway to use to forward certain data message flows to the availability-zone gateway 1405. Also, in some embodiments, the VPC gateway router 1282 is configured to perform SNAT operations to translate internal network addresses used within the VPC to a set of one or more external source network addresses, and to perform the reverse SNAT operations. The external source network addresses in some embodiments are addresses within the availability zone. In some embodiments, the VPC gateway router 1282 does not perform SNAT operations for traffic exchanged between its VPC and another VPC that is deployed in the same availability zone, while in other embodiments, it performs such SNAT operations for some or all of the other VPCs.
In some embodiments, the VPC gateway 1282 is configured to perform other service operations or to use service engines/appliances to perform such other service operations. For such embodiments, the process 1300 configures (at 1330) the VPC gateway to perform other service operations (e.g., load balancing operations, firewall operations, etc.) or to forward data messages to service engines/appliances to perform such other service operations. In some embodiments, the VPC gateway is configured to perform service operations and/or forward data messages to service engines/appliances to perform such service operations, but this configuration, in some embodiments, is not part of the process 1300 when the VPC gateway is deployed and instead is part of another process that is performed subsequently (e.g., upon deployment of machines in the VPC that perform certain services or applications).
Resources allocated to the VPC, in some embodiments, are inherited by the guest clusters such that the guest clusters use the resources allocated to the VPC. In some embodiments, the resources include processing resources, storage resources, and network resources (e.g., IP addresses assigned to the VPC, bandwidth allocated to the centralized routing element of the VPC, etc.). Sharing resources, in some embodiments, allows for more efficient use of the allocated resources of the VPC and the GCs within the VPC by avoiding overallocation of resources to the individual GCs or the VPC. Resources can be allocated based on an average utilization of the set of VPC and GC resources; because the variability of the resource needs is reduced by the greater number of clusters, the total load is more likely to fall within a smaller range of the average and, accordingly, a smaller percentage of overallocation is expected to provide sufficient resources for most situations. Additionally, the automated deployment described herein and in U.S. patent application Ser. No. 16/897,652 simplifies the work of a system administrator, who does not need to allocate resources to each workload machine or guest cluster separately.
As further described in U.S. patent application Ser. No. 16/897,652, some embodiments define each member of an endpoint group in terms of a port address as well as an IP address. In such embodiments, the endpoint group's associated IP and port addresses can be used to define source and/or destination IP and port values of service rules (e.g., firewall rules or other middlebox service rules) that are processed by middlebox service engines to perform middlebox service operations. As new guest clusters are added to a VPC, some embodiments add guest cluster machines as members of the endpoint groups (e.g., add the IP addresses of the GC machines to the endpoint group definition) based on the security or network policies defined for the VPC, the guest cluster, or both the VPC and the guest cluster.
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
The bus 1605 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 1600. For instance, the bus 1605 communicatively connects the processing unit(s) 1610 with the read-only memory 1630, the system memory 1625, and the permanent storage device 1635.
From these various memory units, the processing unit(s) 1610 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments. The read-only-memory (ROM) 1630 stores static data and instructions that are needed by the processing unit(s) 1610 and other modules of the computer system. The permanent storage device 1635, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 1600 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1635.
Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 1635, the system memory 1625 is a read-and-write memory device. However, unlike storage device 1635, the system memory is a volatile read-and-write memory, such as random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 1625, the permanent storage device 1635, and/or the read-only memory 1630. From these various memory units, the processing unit(s) 1610 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
The bus 1605 also connects to the input and output devices 1640 and 1645. The input devices enable the user to communicate information and select requests to the computer system. The input devices 1640 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 1645 display images generated by the computer system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as touchscreens that function as both input and output devices.
Some embodiments include electronic components, such as microprocessors, that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD−RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. Several embodiments were described above that use certain CRDs. One of ordinary skill will realize that other embodiments use other types of CRDs. For instance, some embodiments use LB monitor CRD so that load balancing monitors can be created through APIs that refer to such a CRD. LB monitors in some embodiments provide statistics to reflect the usage and overall health of the load balancers. Also, while several examples above refer to container Pods, other embodiments use containers outside of Pods. Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.
References Cited

U.S. Patent Documents

Number | Name | Date | Kind |
---|---|---|---|
6697360 | Gai et al. | Feb 2004 | B1 |
6765914 | Jain et al. | Jul 2004 | B1 |
7869439 | Ramberg et al. | Jan 2011 | B1 |
7890543 | Hunt et al. | Feb 2011 | B2 |
7912955 | Machiraju et al. | Mar 2011 | B1 |
8627442 | Ji et al. | Jan 2014 | B2 |
8683560 | Brooker et al. | Mar 2014 | B1 |
9152803 | Biswas et al. | Oct 2015 | B2 |
9225638 | Jain et al. | Dec 2015 | B2 |
9258312 | O'Neill et al. | Feb 2016 | B1 |
9531590 | Jain et al. | Dec 2016 | B2 |
9536077 | Bignon et al. | Jan 2017 | B2 |
9590901 | Tubaltsev et al. | Mar 2017 | B2 |
9594546 | Todd et al. | Mar 2017 | B1 |
9674275 | Engers et al. | Jun 2017 | B1 |
9755898 | Jain et al. | Sep 2017 | B2 |
9774537 | Jain et al. | Sep 2017 | B2 |
9813509 | Visser et al. | Nov 2017 | B1 |
9825810 | Jain et al. | Nov 2017 | B2 |
9935827 | Jain et al. | Apr 2018 | B2 |
9979641 | Jain et al. | May 2018 | B2 |
10095669 | Karppanen | Oct 2018 | B1 |
10122735 | Wohlgemuth | Nov 2018 | B1 |
10129077 | Jain et al. | Nov 2018 | B2 |
10135737 | Jain et al. | Nov 2018 | B2 |
10193977 | Ke et al. | Jan 2019 | B2 |
10205701 | Voss et al. | Feb 2019 | B1 |
10225137 | Jain et al. | Mar 2019 | B2 |
10257095 | Jain et al. | Apr 2019 | B2 |
10270796 | Veeraswamy et al. | Apr 2019 | B1 |
10320679 | Jain et al. | Jun 2019 | B2 |
10341233 | Jain et al. | Jul 2019 | B2 |
10496605 | Melnik et al. | Dec 2019 | B2 |
10516568 | Jain et al. | Dec 2019 | B2 |
10547521 | Roy et al. | Jan 2020 | B1 |
10594743 | Hong et al. | Mar 2020 | B2 |
10609091 | Hong et al. | Mar 2020 | B2 |
10613888 | Mentz et al. | Apr 2020 | B1 |
10628144 | Myneni et al. | Apr 2020 | B2 |
10652143 | Ravinoothala et al. | May 2020 | B2 |
10693782 | Jain et al. | Jun 2020 | B2 |
10708368 | Young et al. | Jul 2020 | B1 |
10725836 | Savenkov et al. | Jul 2020 | B2 |
10795909 | Bond et al. | Oct 2020 | B1 |
10812337 | Vaidya et al. | Oct 2020 | B2 |
10841226 | Mariappan | Nov 2020 | B2 |
10942788 | Palavalli et al. | Mar 2021 | B2 |
10944691 | Raut et al. | Mar 2021 | B1 |
10951661 | Medan et al. | Mar 2021 | B1 |
10972341 | Mudigonda | Apr 2021 | B2 |
10972386 | Mackie et al. | Apr 2021 | B2 |
11074091 | Nayakbomman et al. | Jul 2021 | B1 |
11086700 | Myneni et al. | Aug 2021 | B2 |
11159366 | Gawade et al. | Oct 2021 | B1 |
11190491 | Kaciulis et al. | Nov 2021 | B1 |
11194483 | Dontu et al. | Dec 2021 | B1 |
11277309 | Vaidya et al. | Mar 2022 | B2 |
11316822 | Gawade | Apr 2022 | B1 |
11436057 | Shen et al. | Sep 2022 | B2 |
11500688 | Liu et al. | Nov 2022 | B2 |
11570146 | Liu et al. | Jan 2023 | B2 |
11606254 | Liu et al. | Mar 2023 | B2 |
11671401 | Singh et al. | Jun 2023 | B2 |
20040098154 | McCarthy | May 2004 | A1 |
20050129019 | Cheriton | Jun 2005 | A1 |
20070244962 | Laadan et al. | Oct 2007 | A1 |
20070245334 | Nieh et al. | Oct 2007 | A1 |
20100149996 | Sun | Jun 2010 | A1 |
20100177674 | Aggarwal | Jul 2010 | A1 |
20100211815 | Mankovskii et al. | Aug 2010 | A1 |
20100246545 | Berzin | Sep 2010 | A1 |
20100293378 | Xiao et al. | Nov 2010 | A1 |
20110161988 | Kashyap | Jun 2011 | A1 |
20110194494 | Aso et al. | Aug 2011 | A1 |
20110282936 | Chekhanovskiy et al. | Nov 2011 | A1 |
20110289508 | Fell | Nov 2011 | A1 |
20120117226 | Tanaka et al. | May 2012 | A1 |
20120150912 | Ripberger | Jun 2012 | A1 |
20120304275 | Ji et al. | Nov 2012 | A1 |
20130018994 | Flavel et al. | Jan 2013 | A1 |
20130019314 | Ji et al. | Jan 2013 | A1 |
20130125230 | Koponen et al. | May 2013 | A1 |
20130174168 | Abuelsaad et al. | Jul 2013 | A1 |
20130266019 | Qu et al. | Oct 2013 | A1 |
20130283339 | Biswas et al. | Oct 2013 | A1 |
20140036730 | Nellikar et al. | Feb 2014 | A1 |
20140129690 | Jaisinghani et al. | May 2014 | A1 |
20140164897 | Yucel et al. | Jun 2014 | A1 |
20140223556 | Bignon et al. | Aug 2014 | A1 |
20140237100 | Cohn et al. | Aug 2014 | A1 |
20140258479 | Tenginakai et al. | Sep 2014 | A1 |
20150063166 | Sif et al. | Mar 2015 | A1 |
20150081767 | Evens | Mar 2015 | A1 |
20150100704 | Davie et al. | Apr 2015 | A1 |
20150172093 | Kaneko et al. | Jun 2015 | A1 |
20150222598 | Koponen et al. | Aug 2015 | A1 |
20150249574 | Zhang | Sep 2015 | A1 |
20150263899 | Tubaltsev et al. | Sep 2015 | A1 |
20150263946 | Tubaltsev et al. | Sep 2015 | A1 |
20150317169 | Sinha et al. | Nov 2015 | A1 |
20150348044 | Smith | Dec 2015 | A1 |
20150379281 | Feroz et al. | Dec 2015 | A1 |
20160036860 | Xing et al. | Feb 2016 | A1 |
20160080422 | Belgodere et al. | Mar 2016 | A1 |
20160094454 | Jain et al. | Mar 2016 | A1 |
20160094457 | Jain et al. | Mar 2016 | A1 |
20160094650 | Rio | Mar 2016 | A1 |
20160094661 | Jain et al. | Mar 2016 | A1 |
20160182293 | Benedetto et al. | Jun 2016 | A1 |
20160217301 | Watanabe et al. | Jul 2016 | A1 |
20160239326 | Kaplan et al. | Aug 2016 | A1 |
20160241436 | Fourie et al. | Aug 2016 | A1 |
20160254964 | Benc | Sep 2016 | A1 |
20160269318 | Su et al. | Sep 2016 | A1 |
20160294612 | Ravinoothala et al. | Oct 2016 | A1 |
20160315809 | McMurry et al. | Oct 2016 | A1 |
20160335129 | Behera et al. | Nov 2016 | A1 |
20160337334 | Murr | Nov 2016 | A1 |
20170005923 | Babakian | Jan 2017 | A1 |
20170005986 | Bansal et al. | Jan 2017 | A1 |
20170031956 | Burk et al. | Feb 2017 | A1 |
20170063632 | Goliya et al. | Mar 2017 | A1 |
20170063782 | Jain et al. | Mar 2017 | A1 |
20170085561 | Han et al. | Mar 2017 | A1 |
20170093790 | Banerjee et al. | Mar 2017 | A1 |
20170171144 | Sagiraju et al. | Jun 2017 | A1 |
20170177394 | Barzik et al. | Jun 2017 | A1 |
20170195210 | Jacob et al. | Jul 2017 | A1 |
20170206034 | Fetik | Jul 2017 | A1 |
20170207963 | Mehta et al. | Jul 2017 | A1 |
20170286698 | Shetty et al. | Oct 2017 | A1 |
20170317954 | Masurekar et al. | Nov 2017 | A1 |
20170332307 | Pan | Nov 2017 | A1 |
20170353351 | Cheng et al. | Dec 2017 | A1 |
20170366416 | Beecham et al. | Dec 2017 | A1 |
20170374106 | Hamou et al. | Dec 2017 | A1 |
20180063194 | Vaidya | Mar 2018 | A1 |
20180083835 | Cole et al. | Mar 2018 | A1 |
20180089299 | Collins et al. | Mar 2018 | A1 |
20180114012 | Sood et al. | Apr 2018 | A1 |
20180123943 | Lee et al. | May 2018 | A1 |
20180131675 | Sengupta et al. | May 2018 | A1 |
20180167453 | Luo | Jun 2018 | A1 |
20180167458 | Ould-Brahim et al. | Jun 2018 | A1 |
20180167487 | Vyas et al. | Jun 2018 | A1 |
20180183757 | Gunda et al. | Jun 2018 | A1 |
20180205605 | Mittal et al. | Jul 2018 | A1 |
20180234459 | Kung | Aug 2018 | A1 |
20180248827 | Scharber et al. | Aug 2018 | A1 |
20180262424 | Roeland et al. | Sep 2018 | A1 |
20180287996 | Tripathy et al. | Oct 2018 | A1 |
20180295036 | Krishnamurthy et al. | Oct 2018 | A1 |
20180331885 | Raymond et al. | Nov 2018 | A1 |
20180359323 | Madden | Dec 2018 | A1 |
20190034237 | Siddappa et al. | Jan 2019 | A1 |
20190036868 | Chandrashekhar et al. | Jan 2019 | A1 |
20190042518 | Marolia et al. | Feb 2019 | A1 |
20190068544 | Hao et al. | Feb 2019 | A1 |
20190079751 | Foskett et al. | Mar 2019 | A1 |
20190097879 | Cai et al. | Mar 2019 | A1 |
20190102280 | Caldato et al. | Apr 2019 | A1 |
20190103992 | Cidon et al. | Apr 2019 | A1 |
20190132220 | Boutros et al. | May 2019 | A1 |
20190132221 | Boutros et al. | May 2019 | A1 |
20190132283 | Ballard et al. | May 2019 | A1 |
20190140895 | Ennis, Jr. et al. | May 2019 | A1 |
20190140921 | Xu et al. | May 2019 | A1 |
20190149512 | Sevinc et al. | May 2019 | A1 |
20190149516 | Rajahalme et al. | May 2019 | A1 |
20190149518 | Sevinc et al. | May 2019 | A1 |
20190171650 | Botev et al. | Jun 2019 | A1 |
20190173780 | Hira | Jun 2019 | A1 |
20190229987 | Shelke et al. | Jul 2019 | A1 |
20190238363 | Boutros et al. | Aug 2019 | A1 |
20190238364 | Boutros et al. | Aug 2019 | A1 |
20190245757 | Meyer et al. | Aug 2019 | A1 |
20190273683 | Jiang et al. | Sep 2019 | A1 |
20190288947 | Jain et al. | Sep 2019 | A1 |
20190306036 | Boutros et al. | Oct 2019 | A1 |
20190306086 | Boutros et al. | Oct 2019 | A1 |
20190356693 | Cahana et al. | Nov 2019 | A1 |
20190384645 | Palavalli et al. | Dec 2019 | A1 |
20190386877 | Vaidya et al. | Dec 2019 | A1 |
20200065080 | Myneni et al. | Feb 2020 | A1 |
20200065166 | Myneni et al. | Feb 2020 | A1 |
20200073692 | Rao | Mar 2020 | A1 |
20200076684 | Naveen et al. | Mar 2020 | A1 |
20200076685 | Vaidya et al. | Mar 2020 | A1 |
20200076734 | Naveen et al. | Mar 2020 | A1 |
20200084112 | Kandaswamy et al. | Mar 2020 | A1 |
20200092275 | Seed et al. | Mar 2020 | A1 |
20200112504 | Osman | Apr 2020 | A1 |
20200213366 | Hong et al. | Jul 2020 | A1 |
20200250009 | Jaeger et al. | Aug 2020 | A1 |
20200250074 | Zhang et al. | Aug 2020 | A1 |
20200252376 | Feng et al. | Aug 2020 | A1 |
20200301801 | Hegde | Sep 2020 | A1 |
20200314006 | Mackie et al. | Oct 2020 | A1 |
20200314173 | Pahwa | Oct 2020 | A1 |
20200344120 | Pianigiani et al. | Oct 2020 | A1 |
20200366558 | Vaidya et al. | Nov 2020 | A1 |
20200374186 | Scott | Nov 2020 | A1 |
20200379812 | Ranjan et al. | Dec 2020 | A1 |
20200382556 | Woolward et al. | Dec 2020 | A1 |
20200401457 | Singhal et al. | Dec 2020 | A1 |
20200403853 | Garipally et al. | Dec 2020 | A1 |
20200403860 | Lewis et al. | Dec 2020 | A1 |
20200409671 | Mazurskiy | Dec 2020 | A1 |
20210004292 | Zlotnick et al. | Jan 2021 | A1 |
20210064442 | Alluboyina et al. | Mar 2021 | A1 |
20210099335 | Li | Apr 2021 | A1 |
20210165695 | Palavalli et al. | Jun 2021 | A1 |
20210200814 | Tal et al. | Jul 2021 | A1 |
20210243164 | Murray et al. | Aug 2021 | A1 |
20210273946 | Iqbal et al. | Sep 2021 | A1 |
20210306285 | Hirasawa et al. | Sep 2021 | A1 |
20210311803 | Zhou et al. | Oct 2021 | A1 |
20210314190 | Liu et al. | Oct 2021 | A1 |
20210314239 | Shen et al. | Oct 2021 | A1 |
20210314240 | Liu et al. | Oct 2021 | A1 |
20210314300 | Shen et al. | Oct 2021 | A1 |
20210314361 | Zhou et al. | Oct 2021 | A1 |
20210314388 | Zhou | Oct 2021 | A1 |
20210328858 | Asveren et al. | Oct 2021 | A1 |
20210349765 | Zhou et al. | Nov 2021 | A1 |
20210352044 | Asveren et al. | Nov 2021 | A1 |
20210365308 | Myneni et al. | Nov 2021 | A1 |
20210397466 | McKee et al. | Dec 2021 | A1 |
20210409336 | Talur | Dec 2021 | A1 |
20220012045 | Rudraraju et al. | Jan 2022 | A1 |
20220035651 | Maurya et al. | Feb 2022 | A1 |
20220070250 | Baid et al. | Mar 2022 | A1 |
20220158926 | Wennerström et al. | May 2022 | A1 |
20220182439 | Zhou et al. | Jun 2022 | A1 |
20220200865 | Vaidya et al. | Jun 2022 | A1 |
20220210113 | Pillareddy et al. | Jun 2022 | A1 |
20220278926 | Sharma et al. | Sep 2022 | A1 |
20220303246 | Miriyala et al. | Sep 2022 | A1 |
20220311738 | Singh et al. | Sep 2022 | A1 |
20220321495 | Liu et al. | Oct 2022 | A1 |
20220400053 | Liu et al. | Dec 2022 | A1 |
20230070224 | Huo et al. | Mar 2023 | A1 |
20230104568 | Miriyala et al. | Apr 2023 | A1 |
20230179573 | Sosnovich et al. | Jun 2023 | A1 |
Foreign Patent Documents

Number | Date | Country |
---|---|---|
2004227600 | May 2009 | AU |
3107455 | Feb 2020 | CA |
106789367 | May 2017 | CN |
107947961 | Apr 2018 | CN |
108809722 | Nov 2018 | CN |
110531987 | Dec 2019 | CN |
110611588 | Dec 2019 | CN |
111327640 | Jun 2020 | CN |
111371627 | Jul 2020 | CN |
111865643 | Oct 2020 | CN |
113141386 | Jul 2021 | CN |
2464151 | Jun 2012 | EP |
2464152 | Jun 2012 | EP |
2830270 | Jan 2015 | EP |
3316532 | May 2018 | EP |
3617879 | Mar 2020 | EP |
2011070707 | Apr 2011 | JP |
2012099048 | May 2012 | JP |
2015115043 | Jun 2015 | JP |
2018523932 | Aug 2018 | JP |
2011159842 | Dec 2011 | WO |
2016160523 | Oct 2016 | WO |
2018044352 | Mar 2018 | WO |
2019241086 | Dec 2019 | WO |
2020041073 | Feb 2020 | WO |
2021196080 | Oct 2021 | WO |
2022026028 | Feb 2022 | WO |
2022204941 | Oct 2022 | WO |
Other Publications

Entry |
---|
Author Unknown, “OpenShift Container Platform 4.6,” Mar. 3, 2021, 41 pages, Red Hat, Inc. |
Author Unknown, “E-Security Begins with Sound Security Policies,” Jun. 14, 2001, 23 pages, Symantec Corporation. |
Darabseh, Ala, et al., “SDDC: A Software Defined Datacenter Experimental Framework,” Proceedings of the 2015 3rd International Conference on Future Internet of Things and Cloud, Aug. 24-26, 2015, 6 pages, IEEE Computer Society, Washington, D.C., USA. |
Non-published Commonly Owned U.S. Appl. No. 16/897,627, filed Jun. 10, 2020, 85 pages, VMware, Inc. |
Non-published Commonly Owned U.S. Appl. No. 16/897,640, filed Jun. 10, 2020, 84 pages, VMware, Inc. |
Non-published Commonly Owned U.S. Appl. No. 16/897,652, filed Jun. 10, 2020, 84 pages, VMware, Inc. |
Non-published Commonly Owned U.S. Appl. No. 16/897,666, filed Jun. 10, 2020, 78 pages, VMware, Inc. |
Non-published Commonly Owned U.S. Appl. No. 16/897,680, filed Jun. 10, 2020, 78 pages, VMware, Inc. |
Non-published Commonly Owned U.S. Appl. No. 16/897,695, filed Jun. 10, 2020, 77 pages, VMware, Inc. |
Non-published Commonly Owned U.S. Appl. No. 16/897,704, filed Jun. 10, 2020, 85 pages, VMware, Inc. |
Non-published Commonly Owned U.S. Appl. No. 16/897,715, filed Jun. 10, 2020, 70 pages, VMware, Inc. |
Non-published Commonly Owned U.S. Appl. No. 17/072,115, filed Oct. 16, 2020, 49 pages, VMware, Inc. |
Non-published Commonly Owned U.S. Appl. No. 17/112,689, filed Dec. 4, 2020, 63 pages, VMware, Inc. |
Non-published Commonly Owned U.S. Appl. No. 17/176,191, filed Feb. 16, 2021, 36 pages, VMware, Inc. |
Rouse, Margaret, “What is SDDC (software-defined data center)?—Definition from WhatIs.com,” Mar. 2017, 5 pages, TechTarget.com. |
Qi, Shixiong, et al., “Assessing Container Network Interface Plugins: Functionality, Performance, and Scalability,” IEEE Transactions on Network and Service Management, Mar. 2021, 16 pages, vol. 18, No. 1, IEEE. |
Abhashkumar, Anubhavnidhi, et al., “Supporting Diverse Dynamic Intent-based Policies Using Janus,” CoNEXT 17, Dec. 12-15, 2017, 14 pages, ACM, Incheon, KR. |
Abwnawar, Nasser, “A Policy-Based Management Approach to Security in Cloud Systems,” Feb. 2020, 184 pages, De Monfort University, Leicester, UK. |
Author Unknown, “Kubernetes Core Concepts for Azure Kubernetes Service (AKS),” Jun. 3, 2019, 6 pages, retrieved from https://docs.microsoft.com/en-us/azure/aks/concepts-clusters-workloads. |
Chawla, Harsh, et al., “Building Microservices Applications on Microsoft Azure: Designing, Developing, Deploying, and Monitoring,” Month Unknown 2019, 271 pages, Harsh Chawla and Hemant Kathuria, India. |
Sayfan, Gigi, “Mastering Kubernetes: Automating container deployment and management,” May 2017, 426 pages, Packt Publishing, Birmingham, UK. |
Wodicka, Brent, “A Developer's Guide to Container Orchestration, Kubernetes, & AKS,” Mar. 19, 2019, 5 pages, AIS, Reston, VA, USA. |
Non-published Commonly Owned U.S. Appl. No. 17/692,634, filed Mar. 11, 2022, 42 pages, VMware, Inc. |
Author Unknown, “NSX vSphere API Guide—NSX 6.2 for vSphere,” Jan. 6, 2017, 400 pages, VMware, Inc. |
Author Unknown, “Advanced Networking Features in Kubernetes and Container Bare Metal,” Document 606835-001, Dec. 2018, 42 pages, Intel Corporation. |
Author Unknown, “Chapter 4: Default Security Policy,” IBM Security Access Manager Version 9.0, Oct. 2015, 18 pages. |
Author Unknown, “Containers and Container Networking for Network Engineers: VMware NSX Container Networking,” Jan. 2018, 58 pages, VMware, Inc. |
Balla, David, et al., “Adaptive Scaling of Kubernetes Pods,” NOMS 2020—2020 IEEE/IFIP Network Operations and Management Symposium, Apr. 20-24, 2020, 5 pages, IEEE, Budapest, Hungary. |
Prior Publication Data

Number | Date | Country |
---|---|---|
20220038311 A1 | Feb 2022 | US |
Related U.S. Application Data

Provisional Application Number | Date | Country |
---|---|---|
63058490 | Jul 2020 | US |