Load balancers are commonly used in datacenters to spread the traffic load across a number of available computing resources that can handle a particular type of traffic. For instance, load balancers are topologically deployed at the edge of the network and between different types of VMs (e.g., between webservers and application servers, and between application servers and database servers). In some deployments, the load balancers are standalone machines (e.g., F5 machines) that perform load balancing functions. In other deployments, the load balancers are service virtual machines (VMs) that execute on the same host computing devices that execute the different layers of servers that have their traffic balanced by the load balancers.
In many load balancer deployments, the load balancers serve as chokepoint locations in the network topology because they become network traffic bottlenecks as the traffic load increases. Also, these deployments do not seamlessly grow and shrink the number of the computing devices that receive the load balanced traffic, as the data traffic increases and decreases.
Some embodiments provide an elastic architecture for providing a service in a computing system. To perform a service on the data messages, the service architecture uses a service node (SN) group that includes one primary service node (PSN) and zero or more secondary service nodes (SSNs). The service can be performed on a data message by either the PSN or one of the SSNs. However, in addition to performing the service, the PSN also performs a load balancing operation that assesses the load on each service node (i.e., on the PSN or each SSN) and, based on this assessment, has the data messages distributed to the service node(s) in its SN group.
Based on the assessed load, the PSN in some embodiments also has one or more SSNs added to or removed from its SN group. In some embodiments, the PSN directs a set of controllers to add (e.g., instantiate or allocate) or remove an SSN to or from the SN group. Also, to assess the load on the service nodes, the PSN in some embodiments receives message load data from the controller set, which collects such data from each service node. In other embodiments, the PSN receives such load data directly from the SSNs.
As mentioned above, the PSN has the data messages distributed among the service nodes in its SN group based on its assessment of the message traffic load on the service nodes of the SN group. The PSN uses different techniques in different embodiments to distribute the data messages to the service node(s) in its group. In some embodiments, the PSN receives each data message for which the service has to be performed. In these embodiments, the PSN either performs the service on the data message, or re-directs the data message to an SSN to perform the service on the data message. To redirect the data messages, the PSN in different embodiments uses different techniques, such as MAC redirect (for L2 forwarding), IP destination network address translation (for L3 forwarding), port address translation (for L4 forwarding), L2/L3 tunneling, etc. In some embodiments, the PSN has a connection data store that maintains the identity of the service node that it previously identified for each data message flow, in order to ensure that data messages that are part of the same flow are directed to the same service node (i.e., to the PSN or the same SSN).
In other embodiments, the PSN configures a set of one or more front-end load balancers (FLBs) that receives the data messages before the PSN, so that the FLB set can direct the data messages to the PSN or the SSNs. To configure the FLB set, the PSN in some embodiments receives the first data message of a new data message flow that is received by the FLB set, so that the PSN can determine how the new flow should be distributed. When such a data message has to be forwarded to a particular SSN, the PSN in some embodiments directs the data message to the SSN, and configures the FLB set to direct the data message's flow to the SSN. Before the configuration of the FLB set is completed, the PSN in some embodiments may have to receive data messages that are part of this flow (i.e., the flow that is directed to the particular SSN). In such a situation, the PSN of some embodiments directs the data messages to the particular SSN, until the FLB set can directly forward subsequent data messages of this flow to the particular SSN.
In other embodiments, the PSN configures the FLB set differently. For instance, in some embodiments, the PSN configures the FLB set by simply providing the identity (e.g., the MAC and/or IP address) of each service node in the SN group, and the FLB set uses its own load balancing scheme (e.g., a standard equal cost multipath, ECMP, scheme) to distribute the data message flows to the service nodes in the SN group in a stateful or stateless manner. In other embodiments, the PSN configures the FLB set by providing to the FLB set a load balancing parameter set that provides a particular scheme for the FLB set to use to distribute the data message flows to the service nodes in the SN group.
For example, in some embodiments, the PSN provides to the FLB set a hash table that defines multiple hash value ranges and a service node for each hash value range. In some such embodiments, a load balancer in the FLB set generates a hash value from a header parameter set of a data message flow, identifies the hash range (in the hash table) that contains the hash value, and selects for the data message flow the service node that is associated with the identified hash range. To make its flow distribution stateful, the load balancer in some embodiments stores the identity of the identified service node for the data message flow in a flow connection-state storage, which the load balancer can subsequently access to select the identified service node for subsequent data messages of the flow.
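As a hedged illustration of this hash-range selection, the following sketch uses a hypothetical hash table, illustrative node names, a five-tuple flow key, and a single-byte hash space; none of these details are mandated by the embodiments described above:

```python
import hashlib

# Hypothetical hash table provided by the PSN: each entry maps a hash-value
# range [low, high) to the service node that handles flows hashing into it.
HASH_TABLE = [
    (0, 85, "PSN"),
    (85, 170, "SSN1"),
    (170, 256, "SSN2"),
]

# Flow connection-state storage that makes the distribution stateful.
flow_state = {}

def hash_flow(five_tuple):
    """Hash the flow's header-parameter set into the table's hash-value space."""
    key = "|".join(str(v) for v in five_tuple).encode()
    return hashlib.sha256(key).digest()[0]  # one byte -> a value in [0, 256)

def select_service_node(five_tuple):
    """Pick a service node for a flow, reusing the stored choice when one exists."""
    if five_tuple in flow_state:
        return flow_state[five_tuple]
    value = hash_flow(five_tuple)
    for low, high, node in HASH_TABLE:
        if low <= value < high:
            flow_state[five_tuple] = node  # remember the node for later messages
            return node

flow = ("10.0.0.5", "10.0.1.9", 49152, 80, "TCP")
print(select_service_node(flow))  # first message of the flow selects a node
print(select_service_node(flow))  # subsequent messages reuse the same node
```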
In some embodiments, the service nodes (PSN and SSNs), as well as some or all of the source compute nodes (SCNs) and destination compute nodes (DCNs) that send and receive messages to and from the service nodes, are machines (e.g., virtual machines (VMs) or containers) that execute on host computing devices. A host computing device in some embodiments can execute an arbitrary combination of SCNs, DCNs and service nodes. In some embodiments, the host also executes one or more software forwarding elements (e.g., software switches and/or software routers) to interconnect the machines that execute on the host and to interconnect these machines (through the network interface of the host and intervening forwarding elements outside of the host) with other SCNs, DCNs, and/or service nodes that operate outside of the host. In some embodiments, one or more SCNs, DCNs, and service nodes (PSN and SSNs) are standalone devices (i.e., are not machines that execute on a host computing device with other machines).
The elastic service architecture of some embodiments can be used to provide different services in a computer network. In some embodiments, the services can be any one of the traditional middlebox services, such as load balancing, firewall, intrusion detection, intrusion protection, network address translation (NAT), WAN (wide area network) optimizer, etc. When the service that is performed by the service node group is not load balancing, the PSN of the service node group (that includes the PSN and one or more SSNs) in some embodiments performs a load balancing service in addition to the service performed by all the service nodes in the group. As mentioned above, the PSN in some embodiments performs this load balancing service in order to ensure that the SN group's service is distributed among the service nodes of the group (i.e., in order to distribute the data message load among these service nodes). As described above, the PSN performs different load balancing operations in different embodiments. These operations range from re-directing data message flows directly to the SSNs in some embodiments, to configuring an FLB set to direct the data message flows to the service nodes in other embodiments.
In some cases, the SN group's service is load balancing. In these cases, the PSN performs two types of load balancing. The first type of load balancing is the same load balancing that is performed by all of the service nodes in the group, while the second type of load balancing is a load balancing operation that the PSN performs to ensure that the first type of load balancing is distributed among the group's service nodes (including the PSN). For instance, in some embodiments, the first type load balancing operation is based on L3, L4 and/or L7 parameters of the data messages, and each SN of the group performs this load balancing operation. In addition to performing this load balancing operation, the PSN in some embodiments also performs a second load balancing operation, which is an L2 load balancing operation (e.g., a load balancing operation that relies on the data message L2 parameters and on MAC redirect) that distributes the data messages (on which it does not perform the first type load balancing) to one or more other service nodes of the SN group.
In other embodiments, the first type load balancing operation is based on L4 and/or L7 parameters of the data messages. Each SN of the group performs this L4 and/or L7 load balancing operation. In addition, the PSN of some embodiments also performs an L2 and/or L3 load balancing operation (e.g., a load balancing operation that relies on the data message L3 parameters and IP address DNAT) to distribute the data messages (on which it does not perform the first type load balancing) to one or more other service nodes of the SN group.
In cases where the SN group's service is load balancing, the PSN's second type of load balancing operation in some embodiments might not require the PSN to directly re-direct the data message flows to the SSNs. For instance, in some embodiments, the PSN's second type load balancing might simply configure an FLB set to direct the data message flows to the service nodes. As mentioned above, the PSN can configure the FLB set differently in different embodiments, e.g., by providing to the FLB set only the SN group membership data, or by providing to the FLB set a hash table that identifies a service node for each of several header-parameter hash-value ranges.
In some embodiments, the SSNs of a SN group also re-direct the data message flows that they receive. For example, in some embodiments, the PSN supplies to an FLB set a SN group update each time a service node is added to or removed from the group. In some such embodiments, each FLB in the FLB set distributes the data message flows in a stateless manner. Before such an FLB in the FLB set updates its distribution scheme based on the updated group membership, the FLB might send a new data message flow to a first service node based on the FLB's old distribution scheme. After this FLB updates its distribution scheme based on the updated group membership, the FLB might send the data message flow to a second service node based on the FLB's new distribution scheme.
For such a case, the first service node needs to re-direct the data messages for the new flow to the second service node that needs to process these data messages based on the new distribution scheme. When the FLB set distributes data message flows based on its own load balancing distribution scheme, each service node needs to perform this load balancing distribution scheme so that it can predict the service node that should receive the new data message flow based on an updated SN group membership. When the FLB set distributes data message flows based on a load balancing parameter (LBP) set provided by the PSN (e.g., based on the hash table provided by the PSN), each SSN in some embodiments either (1) obtains the LBP set from the PSN, or (2) performs the same load balancing operations as the PSN in order to independently derive the LBP set that the PSN will provide to the FLB set. In these embodiments, each SSN uses the LBP set in order to re-direct a new message flow to the correct service node when the FLB set forwards the message flow incorrectly to the SSN.
When the FLB set distributes data message flows in a stateless manner, a first service node (e.g., a PSN or an SSN) might also need to re-direct to a second service node an old data message flow that it receives from the FLB set, because the second service node has previously been processing the data message flow and the FLB set has statelessly begun forwarding the data message flow to the first service node based on an update that it has received from the PSN. To perform this re-direction, the service nodes in some embodiments synchronize in real-time flow connection-state data that identifies the flows that each of them is handling at any time. In some embodiments, the flow connection-state data is synchronized through control channel communication between the service nodes.
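A minimal sketch of this redirection, assuming a synchronized flow connection-state table keyed by an illustrative five-tuple (the node names, table contents, and callbacks are hypothetical):

```python
# Flow connection-state data synchronized among the service nodes (e.g., over
# a control channel); it records which node currently handles each flow.
synced_flow_state = {
    ("10.0.0.5", "10.0.1.9", 49152, 80, "TCP"): "SSN1",
}

def handle_message(my_node, five_tuple, process, redirect):
    """Process a message locally, or redirect it to the node that owns the flow."""
    owner = synced_flow_state.get(five_tuple)
    if owner is None or owner == my_node:
        synced_flow_state[five_tuple] = my_node  # claim new flows for this node
        process(five_tuple)
    else:
        redirect(owner, five_tuple)  # FLB shifted an existing flow; send it on

# After a membership update, the stateless FLB set starts sending this old
# flow to SSN2, which redirects it to SSN1 because SSN1 already owns the flow.
handle_message(
    "SSN2",
    ("10.0.0.5", "10.0.1.9", 49152, 80, "TCP"),
    process=lambda flow: print("processing", flow),
    redirect=lambda node, flow: print("redirecting to", node, flow),
)
```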
The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, Detailed Description, the Drawings and the Claims is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, Detailed Description and the Drawings.
The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.
In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.
Some embodiments provide an elastic architecture for providing a service in a computing system. As used in this document, data messages refer to a collection of bits in a particular format sent across a network. One of ordinary skill in the art will recognize that the term data message may be used herein to refer to various formatted collections of bits that may be sent across a network, such as Ethernet frames, IP packets, TCP segments, UDP datagrams, etc.
To perform a service on the data messages, the service architecture uses a service node (SN) group that includes one primary service node (PSN) and zero or more secondary service nodes (SSNs). The service can be performed on a data message by either the PSN or one of the SSNs. In addition to performing its group's service, the PSN also performs a load balancing operation that assesses the load on each service node (i.e., on the PSN or each SSN), and based on this assessment, has the data messages distributed to the service node(s) in its SN group.
Based on the assessed load, the PSN in some embodiments also has one or more SSNs added to or removed from its SN group. To add or remove an SSN to or from the service node group, the PSN in some embodiments directs a set of controllers to add (e.g., instantiate or allocate) or remove the SSN to or from the SN group. Also, to assess the load on the service nodes, the PSN in some embodiments receives message load data from the controller set, which collects such data from each service node. In other embodiments, the PSN receives such load data directly from the SSNs.
The elastic service architecture of some embodiments can be used to provide different services in a computer network. In some embodiments, the services can be any one of the traditional middlebox services, such as load balancing, firewall, intrusion detection, intrusion protection, network address translation (NAT), WAN optimizer, etc. When the service that is performed by the service node group is not load balancing, the PSN of the service node group (that includes the PSN and one or more SSNs) in some embodiments performs a load balancing service in addition to the service performed by all the service nodes in the group. As mentioned above, the PSN performs this load balancing service in order to ensure that the SN group's service is distributed among the service nodes of the group. This load balancing service of the PSN is different in different embodiments. This service ranges from re-directing data message flows directly to the SSNs in some embodiments, to configuring a front-end load balancer (FLB) set to direct the data message flows to the service nodes in other embodiments, as further described below.
On the other hand, when the SN group's service is load balancing, the PSN performs two types of load balancing. The first type of load balancing is the same load balancing that is performed by all of the service nodes in the group, while the second type of load balancing is a load balancing operation that the PSN performs to ensure that the first type of load balancing is distributed among the group's service nodes (including the PSN).
For instance, in some embodiments, the first type load balancing operation is an L3, L4 and/or L7 load balancing operation, while the second type of load balancing operation is an L2 load balancing operation. In other embodiments, the first type load balancing operation is an L4 and/or L7 load balancing operation, while the second type of load balancing operation is an L2 and/or L3 load balancing operation. As used in this document, references to L2, L3, L4, and L7 layers are references respectively to the second data link layer, the third network layer, the fourth transport layer, and the seventh application layer of the OSI (Open System Interconnection) layer model.
In different embodiments, the PSN uses different techniques to distribute the data messages to one or more SSNs. In some embodiments, the PSN receives each data message for which the service has to be performed, and either performs the service on the data message, or re-directs the data message to an SSN to perform the service on the data message. In other embodiments, the PSN configures an FLB set that receives the data messages before the PSN, so that the FLB set can direct the data messages to the PSN or the SSNs. As further described below, the PSN can configure the FLB set differently in different embodiments, e.g., by providing to the FLB set only the SN group membership data, by configuring the FLB set for each flow, or by providing to the FLB set a hash table that identifies a service node for each of several header-parameter hash-value ranges.
The service node group 150 has three service nodes, while the service node group 155 has two service nodes. In each of these groups, one service node is a primary service node, with each other node being a secondary service node. The two SN groups 150 and 155 can perform the same service operation (e.g., load balancing operation) or can perform two different service operations (e.g., a firewall operation for SN group 150 and a load balancing operation for SN group 155). However, even when the two SN groups perform the same service operation, the service operation of one SN group is distinct and independent from the service operation of the other SN group (e.g., the SN groups perform two different firewall operations).
Each CN group can include an arbitrary collection of compute nodes, or it can be a collection of a particular type of compute nodes. For instance, the CN groups 160, 165, and 170 in some deployments are a collection of web servers 160, application servers 165, and database servers 170, while in other deployments one CN group (e.g., group 165) includes a collection of different types of servers.
A host computing device (also referred to as a host) in some embodiments can execute an arbitrary combination of SCN, DCN and SN virtual machines.
In some embodiments, the VMs execute on top of a hypervisor, which is a software layer that enables the virtualization of the shared hardware resources of the host. In some of these embodiments, the hypervisor provides the software forwarding element (SFE) functionality on a host computing device, while in other embodiments, the forwarding element functionality is provided by another software module or hardware component (e.g., the network interface card) of the host computing device.
In some embodiments, each service node in an SN group maintains statistics regarding the message traffic load that it processes. Each service node in some embodiments forwards the collected statistics to a set of controllers, which aggregates these statistics and distributes aggregated load data to the PSN in the SN group. Alternatively, in some embodiments, the SSNs directly forward their collected statistics to the PSN.
In some embodiments, the PSN of a SN group uses the aggregated load data to control how the data message flows are directed to different service nodes in its group. In some embodiments, the aggregated load data are also used to determine when new service nodes should be added to or removed from a SN group. In some embodiments, each SSN also receives the load data from the controller set or from other service nodes, in order to compute the same load balancing parameters that the PSN computes.
In some embodiments, the hosts 205-215 are similar to the hosts 105-130 of
When a PSN executes on the agent's host, the SVM agent of some embodiments receives global load statistics or load balancing parameters from the controller set 225 to supply to any PSN that executes on its host. In some embodiments, the SVM agent receives aggregated statistics from the controller set, analyzes the aggregated statistics, and generates and/or adjusts the load balancing parameters of the PSN that executes on the agent's host.
In some embodiments, the SVM agents are not used at all, or are used for only some of the above-described operations. For instance, in some embodiments, the PSN and SSN SVMs directly send their load statistic data to the controller set 225, and/or the PSN SVMs directly receive the global statistic data from the controller set 225. Also, in some embodiments, the SVM agents are not used to compute or adjust the load balancing parameters of the PSNs, as the PSN SVMs compute or adjust these values. In some embodiments, the SVM agents are not used to configure the SVMs. For instance, in some embodiments, the controller set 225 communicates directly with the SVMs to configure their operations.
As mentioned above, the controller set 225 in some embodiments receives load statistic data from the SVMs of each SN group, generates global load statistic data from the received data, and distributes the global load statistic data to the PSN of the SN group. In other embodiments, the SSNs send their load statistics data directly to the PSN of their group. In different embodiments, the SVMs provide the load statistic data (e.g., to the controller set or to the PSN) in terms of different metrics. Examples of such metrics include number of data message flows currently being processed, number of data messages processed within a particular time period, number of payload bytes in the processed messages, etc.
The controller set distributes the global load statistic data in different forms in different embodiments. In some embodiments, the global load data is in the same format as the load data that the controller set receives from the service nodes, except that the global load data is an aggregation of the statistic data received from the different service nodes. In other embodiments, the controller set processes the load statistic data from the SVMs to produce processed global statistic data that is in a different format or is expressed in terms of different metrics than the load statistic data that it receives from the service nodes.
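The following sketch shows one simple way such per-node statistics might be aggregated into global load data; the report layout and field names (flow count, message count, payload bytes) are illustrative assumptions, not the format used by any particular embodiment:

```python
# Hypothetical per-node load reports, expressed with illustrative field names
# for the metrics mentioned above (flow count, message count, payload bytes).
reports = {
    "PSN":  {"flows": 1200, "messages": 48000, "bytes": 61_000_000},
    "SSN1": {"flows":  800, "messages": 30000, "bytes": 35_000_000},
}

def aggregate(per_node_reports):
    """Combine per-node statistics into global load data, keeping the per-node breakdown."""
    totals = {"flows": 0, "messages": 0, "bytes": 0}
    for stats in per_node_reports.values():
        for metric, value in stats.items():
            totals[metric] += value
    return {"per_node": per_node_reports, "totals": totals}

print(aggregate(reports))
```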
Based on the distributed global load statistic data, the PSN of a SN group in some embodiments generates a load balancing parameter (LBP) set for distributing the data message flows (e.g., new data message flows) to the service nodes of the group. In some embodiments, the PSN then uses the LBP set to distribute the data message flows to the service nodes of its SN group, while in other embodiments, the PSN uses the LBP set to configure an FLB set to distribute the data message flows.
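As one possible illustration of deriving an LBP set from global load data, the sketch below carves a hash-value space into ranges sized inversely to each node's reported load; the 256-value space, the inverse-load weighting, and the node names are assumptions made for the example:

```python
def build_hash_table(per_node_load, space=256):
    """Carve the hash-value space into ranges, giving less-loaded nodes larger ranges.

    per_node_load maps a node name to its current load in any consistent
    metric; the inverse-load weighting and the 256-value space are illustrative.
    """
    weights = {node: 1.0 / max(load, 1) for node, load in per_node_load.items()}
    total = sum(weights.values())

    table, low = [], 0
    nodes = sorted(weights)
    for i, node in enumerate(nodes):
        # The last node takes whatever remains so the ranges cover the whole space.
        high = space if i == len(nodes) - 1 else low + round(space * weights[node] / total)
        table.append((low, high, node))
        low = high
    return table

# Nodes reporting lighter loads receive wider hash ranges, and therefore more new flows.
print(build_hash_table({"PSN": 1200, "SSN1": 800, "SSN2": 400}))
```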
As mentioned above, even in the embodiments in which the PSN configures the FLB set, the PSN in some embodiments uses the LBP set to distribute the data message flows (e.g., because the FLB set statelessly distributes the load or has not yet been reconfigured with a new LBP set provided by the PSN). In some embodiments, an SSN might also have to distribute the data message flows during this interim time period for similar reasons. To do this, each SSN of a SN group would have to receive the global load statistic data from the controller set, or the global load statistic data or LBP set from the PSN (directly from the PSN or indirectly through the controller set).
Instead of distributing global load statistic data, the controller set 225 of some embodiments generates an LBP set from the statistic data that it receives from the service nodes of an SN group, and distributes the load balancing parameter set to the PSN of the SN group. In some embodiments, the PSN then uses this load balancing parameter set to distribute the data message flows to the service nodes of its SN group, while in other embodiments, the PSN uses the load balancing parameter set to configure an FLB set to distribute the data message flows. Again, in some cases (e.g., because the FLB set statelessly distributes the load or has not yet been reconfigured with a new LBP set provided by the PSN), the PSN in some embodiments might have to use the load balancing parameter set to distribute the data message flows. In some embodiments, an SSN might also have to distribute the data message flows for the same reasons, and for this, the SSN would have to receive the LBP set from the controller set.
In addition to distributing global load statistic data and/or load balancing parameters, the controller set 225 in some embodiments also adds service nodes to an SN group, or removes service nodes from the SN group, based on the monitored load on the service nodes in the SN group. In some embodiments, the controller set 225 adds or removes a service node based on its own determination, while in other embodiments the controller set adds or removes a service node in response to a request from the PSN of the SN group. In some embodiments, the controller set 225 adds a service node by instantiating a new SVM and adding this SVM to the SN group. In other embodiments, the controller set 225 adds the service node by allocating a previously instantiated SVM to the SN group.
In some embodiments, the controller set 225 provides control and management functionality for defining (e.g., allocating or instantiating) and managing one or more VMs on the host computing devices 205-215. The controller set 225 also provides control and management functionality for defining and managing multiple logical networks that are defined on the common software forwarding elements of the hosts. In some embodiments, the controller set 225 includes multiple different sets of one or more controllers for performing different sets of the above-described controller operations.
As shown in
After receiving the data message, the process determines (at 310) whether the received message is part of a particular data message flow for which the PSN has previously processed at least one data message. To make this determination, the process examines (at 310) a flow connection-state data storage that stores (1) the identity of each of several data message flows that the PSN previously processed, and (2) the identity of the service node that the PSN previously identified as the service node for processing the data messages of each identified flow. In some embodiments, the process identifies each flow in the connection-state data storage in terms of one or more flow attributes, e.g., the flow's five tuple header values, which are the source IP address, destination IP address, source port, destination port, and protocol. Also, in some embodiments, the connection-state data storage is hash indexed based on the hash of the flow attributes (e.g., of the flow's five tuple header values). For such a storage, the PSN generates a hash value from the header parameter set of a data message, and then uses this hash value to identify one or more locations in the storage to examine for a matching header parameter set (i.e., for a matching data message flow attribute set).
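A minimal sketch of such a hash-indexed connection-state storage follows; the bucket count, the key format, and the omission of entry aging are simplifications made for illustration:

```python
import hashlib

class ConnectionStateStorage:
    """Flow connection-state storage indexed by a hash of the flow's five tuple.

    The bucket count and key format are illustrative, and a real store would
    also age out entries that have not matched any message for some time.
    """
    def __init__(self, num_buckets=1024):
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, five_tuple):
        # Hash the flow attributes to pick the bucket to examine.
        key = "|".join(str(v) for v in five_tuple).encode()
        index = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % len(self.buckets)
        return self.buckets[index]

    def lookup(self, five_tuple):
        """Return the service node previously chosen for this flow, or None."""
        for stored_tuple, node in self._bucket(five_tuple):
            if stored_tuple == five_tuple:
                return node
        return None

    def record(self, five_tuple, node):
        """Remember which service node handles this flow."""
        self._bucket(five_tuple).append((five_tuple, node))

store = ConnectionStateStorage()
flow = ("10.0.0.5", "10.0.1.9", 49152, 80, "TCP")
print(store.lookup(flow))   # None: no entry yet for this flow
store.record(flow, "SSN1")
print(store.lookup(flow))   # "SSN1" for subsequent messages of the flow
```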
When the process identifies (at 310) an entry in the flow connection-state data storage that matches the received data message flow's attributes (i.e., when the process determines that it previously processed another data message that is part of the same flow as the received data message), the process directs (at 315) the received data message to the service node (in the SN group) that is identified in the matching entry of the connection-state data storage (i.e., to the service node that the PSN previously identified for processing the data messages of the particular data message flow). This service node then performs the service on the data message, and augments the statistics that it maintains (e.g., the data message count, the byte count, etc.) regarding the data messages that it processes. This service node can be the PSN itself, or it can be an SSN in the SN group. After 315, the process ends.
On the other hand, when the process determines (at 310) that the connection-state data storage does not store an entry for the received data message (i.e., determines that it previously did not process another data message that is part of the same flow as the received data message), the process transitions to 320. In some embodiments, the connection-state data storage periodically removes old entries that have not matched any received data messages in a given duration of time. Accordingly, in some embodiments, when the process determines (at 310) that the connection-state data storage does not store an entry for the received data message, the process may have previously identified a service node for the data message's flow, but the matching entry might have been removed from the connection-state data storage.
At 320, the process determines whether the received data message should be processed locally by the PSN, or remotely by another service node of the SN group. To make this determination, the PSN in some embodiments performs a load balancing operation that identifies the service node for the received data message flow based on the load balancing parameter set that the PSN maintains for the SN group at the time that the data message is received. As mentioned before, the load balancing parameter set is adjusted in some embodiments (1) based on updated statistic data regarding the traffic load on each service node in the SN group, and (2) based on service nodes that are added to or removed from the SN group.
The process 300 performs different load balancing operations (at 320) in different embodiments. In some embodiments, the load balancing operation relies on L2 parameters of the data message flows (e.g., generates hash values from the L2 parameters, such as source MAC addresses, to identify hash ranges that specify service nodes for the generated hash values) to distribute the data messages to service nodes, while in other embodiments, the load balancing operation relies on L3/L4 parameters of the flows (e.g., generates hash values from the L3/L4 parameters, such as five tuple header values, to identify hash ranges that specify service nodes for the generated hash values) to distribute the data messages to service nodes. In yet other embodiments, the load balancing operations (at 320) use different techniques (e.g., round robin techniques) to distribute the load amongst the service nodes.
When the process determines (at 320) that the PSN should process the received data message, the process directs (at 325) a service module of the PSN to perform the SN group's service on the received data message. Based on this operation, the PSN's service module also augments (at 325) the statistics that it maintains (e.g., the data message count, the byte count, etc.) regarding the data messages that the PSN processes. At 325, the process 300 also creates an entry in the flow connection-state data storage to identify the PSN as the service node for processing data messages that are part of the same flow as the received data message. In some embodiments, this entry identifies the PSN and identifies the received data message header values (e.g., five tuple values) that specify the message's flow. After 325, the process ends.
When the process determines (at 320) that based on its load balancing parameter set, the PSN should not process the received data message, the process identifies (at 320) another service node in the PSN's SN group to perform the service on the data message. Thus, in this situation, the process directs (at 330) the message to another service node in the PSN's SN group. To redirect the data messages, the PSN in different embodiments uses different techniques, such as MAC redirect (for L2 forwarding), IP destination network address translation (for L3 forwarding), port address translation (for L4 forwarding), L2/L3 tunneling, etc.
To perform MAC redirect, the process 300 in some embodiments changes the MAC address to a MAC address of the service node that it identifies at 320. For instance, in some embodiments, the process changes the MAC address to a MAC address of another SFE port in a port group that contains the SFE port connected with the PSN. More specifically, in some embodiments, the service nodes (e.g., SVMs) of a SN group are assigned ports of one port group that can be specified on the same host or different hosts. In some such embodiments, when the PSN wants to redirect the data message to another service node, it replaces the MAC address of the PSN's port in the data message with the MAC address of the port of the other service node, and then provides this data message to the SFE so that the SFE can forward it directly or indirectly (through other intervening forwarding elements) to the port of the other service node.
Similarly, to redirect the data message to the other service node through IP destination network address translation (DNAT), the PSN replaces the destination IP address in the data message with the destination IP address of the other service node, and then provides this data message to the SFE so that the SFE can forward it directly or indirectly (through other intervening forwarding elements) to the other service node. In some embodiments, the initial destination IP address in the data message that gets replaced is the virtual IP (VIP) address of the SN group. This VIP in some embodiments is the IP address of the PSN.
To redirect the data message to the other service node through port address translation, the PSN replaces the destination port address in the data message with the destination port address of the other service node, and then uses this new port address to direct the data message to the other service node. In some embodiments, the PSN's network address translation may include changes to two or more of the MAC address, IP address, and port address.
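The following sketch shows these three header rewrites on a dictionary that stands in for a data message's headers; the field names and example addresses are placeholders, and an actual implementation would rewrite the packet headers themselves before handing the message back to the forwarding element:

```python
# Illustrative redirect helpers operating on a dictionary that stands in for a
# data message's headers; the field names and addresses are placeholders.
def mac_redirect(message, node_mac):
    """L2 redirect: replace the destination MAC with the chosen service node's MAC."""
    message["dst_mac"] = node_mac
    return message

def ip_dnat(message, node_ip):
    """L3 redirect: replace the destination IP (e.g., the group VIP) with the node's IP."""
    message["dst_ip"] = node_ip
    return message

def port_translate(message, node_port):
    """L4 redirect: replace the destination port with the chosen node's port."""
    message["dst_port"] = node_port
    return message

msg = {"dst_mac": "00:00:00:00:00:01", "dst_ip": "10.0.0.100", "dst_port": 8080}
print(mac_redirect(dict(msg), "00:00:00:00:00:02"))
print(ip_dnat(dict(msg), "10.0.0.12"))
print(port_translate(dict(msg), 9090))
```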
After directing (at 330) the data message to the other service node, the process creates (at 335) an entry in the connection-state data storage to identify the other service node as the service node for processing data messages that are part of the same flow as the received data message. In some embodiments, this entry identifies (1) the other service node and (2) the received data message header values (e.g., five tuple values) that specify the message's flow. After 335, the process ends.
In some embodiments, the destination node for a data message after a service node performs a service on the data message is the source compute node that sent the data message directly or indirectly to the service node group. In other embodiments, the service node is deployed at the edge of a network, and the destination node for a data message that a service node processes, is the compute node or forwarding element inside or outside of the network to which the service node is configured to send its processed messages. In still other embodiments, the service node identifies the destination node for a data message that it processes based on the data message's header parameters and based on the service node's configured rules that control its operation.
The second stage 410 illustrates that at a time T2, the SN group has been expanded to include another service node, SSN1, which is implemented by a second service virtual machine, SVM2. In some embodiments, the service node SSN1 is added to the group because the data message load on the group has exceeded a first threshold value. The controller set 225 in some embodiments adds SSN1 when it detects that the data message load has exceeded the first threshold value, or when the PSN detects this condition and directs the controller set to add SSN1. To assess whether the data message load exceeds a threshold value, the controller set or PSN in different embodiments quantifies the data message load based on different metrics. In some embodiments, these metrics include one or more of the following parameters: (1) number of flows being processed by the SN group or by individual service nodes in the group, (2) number of packets being processed by the SN group or by individual service nodes in the group, and (3) amount of packet data being processed by the SN group or by individual service nodes in the group.
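A hedged sketch of such a threshold check follows; the metric names mirror the parameters listed above, while the threshold values and the any-metric-exceeds policy are assumptions made for illustration:

```python
# Hypothetical thresholds and a load snapshot; the metric names mirror the
# parameters listed above, while the values are placeholders.
THRESHOLDS = {"flows": 2000, "packets": 100000, "bytes": 100_000_000}

def should_add_service_node(group_load):
    """Return True when any monitored metric for the SN group exceeds its threshold."""
    return any(group_load.get(metric, 0) > limit for metric, limit in THRESHOLDS.items())

print(should_add_service_node({"flows": 2400, "packets": 90000, "bytes": 80_000_000}))  # True
print(should_add_service_node({"flows": 1500, "packets": 90000, "bytes": 80_000_000}))  # False
```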
The second stage 410 also illustrates that at time T2, the PSN performs the SN group's service on some of the data message flows, while directing other data message flows to SSN1 so that this service node can perform this service on these other flows. As shown, once either the PSN or SSN1 performs the service on a data message, the PSN or SSN1 directs the data message to one of the destination compute nodes that should receive the data message after the SN group processes them. As shown in
The third stage 415 illustrates that at a time T3, the SN group has been expanded to include yet another service node, SSN2, which is a third service virtual machine, SVM3. In some embodiments, the service node SSN2 is added to the group because the data message load on the group, or on SVM1 and/or SVM2, has exceeded a second threshold value, which is the same as the first threshold value in some embodiments or is different than the first threshold value in other embodiments. As before, the controller set 225 in some embodiments adds SSN2 when it or the PSN detects that the data message load has exceeded the second threshold value. The third stage 415 also illustrates that at time T3, the PSN performs the SN group's service on some of the data message flows, while directing other data message flows to SSN1 or SSN2, so that these service nodes can perform this service on these other flows. As shown, once any of the service nodes, PSN, SSN1, or SSN2, performs the service on a data message, the service node directs the data message to one of the destination compute nodes that should receive the data message after the SN group processes them.
In some embodiments, the SSNs of one SN group are PSNs or SSNs of another SN group.
In the first stage 510, PSN1 receives all data messages on which the SN group 500 has to perform its service, performs this service on these messages, and then directs these messages to a first set of destination compute nodes 525. Similarly, in this stage, PSN2 receives all data messages on which the SN group 505 has to perform its service, performs this service on these messages, and then directs these messages to a second set of destination compute nodes 530, which is different than the first set of compute nodes 525.
The second stage 515 illustrates that at a time T2, the SN group 500 has been expanded to include SVM2 as a service node SSN1. Accordingly, at this stage, SVM2 performs the service operations of PSN2 of SN group 505, and the service operations of SSN1 of SN group 500. In some embodiments, the controller set or PSN1 decides to add SVM2 as service node SSN1 to SN group 500 because the data message load on this group (i.e., on PSN1) has exceeded a first threshold value (as detected by the controller set or the PSN1) and SVM2 has excess capacity to handle service operations for SN group 500.
The second stage 515 also illustrates that at time T2, PSN1 performs the service of SN group 500 on some of the data message flows, while directing other data message flows to SVM2 so that SVM2 can perform the service of group 500 on these other flows. At this stage, SVM2 not only performs the service of group 500 on the flows passed by PSN1, but also performs the service of group 505 on the message flows that it receives for group 505. Once either SVM1 or SVM2 performs the service of group 500 on a data message, the SVM directs the data message to one of the first set of destination compute nodes 525. Also, once SVM2 performs the service of group 505 on a data message, this SVM directs the data message to one of the second set of destination compute nodes 530.
Some embodiments do not allow one SN group to add an underutilized SVM of another SN group (i.e., to use the excess capacity of another service node group's underutilized SVM). However, some of these embodiments allow one SN group to add a service node by instantiating or utilizing a new SVM on a host that executes the PSN or SSN of another SN group. In this manner, these embodiments allow one SN group to capture the underutilized computational capacity of another group's host.
In the example illustrated in
In some embodiments, the service of a SN group is load balancing traffic that a set of SCNs sends to a set of two or more DCNs. In such cases, the SN group's PSN performs two types of load balancing. The first type of load balancing is the same load balancing that is performed by all of the service nodes in the group, while the second type of load balancing is a load balancing operation that the PSN performs to ensure that the first type of load balancing is distributed among the group's service nodes (including the PSN).
For instance, in some embodiments, the first type load balancing operation is an L3, L4 and/or L7 load balancing operation, while the second type of load balancing operation is an L2 load balancing operation. In other embodiments, the first type load balancing operation is an L4 and/or L7 load balancing operation, while the second type of load balancing operation is an L2 and/or L3 load balancing operation. An LN load balancing operation distributes the load amongst the DCNs based on LN header parameters of the data messages, where N is an integer that can be 2, 3, 4, or 7. When a load balancing operation is based on parameters of multiple layers, it distributes the load amongst the DCNs based on the header parameters of those layers. For example, when the load balancing is based on L2 and L3 header values, the load balancer in some embodiments generates a hash of the L2 and L3 header values of the data message flow and identifies a DCN for the data message flow based on the generated hash value. Alternatively, for such an example, the load balancer in some embodiments uses the flow's L2 and L3 header values to identify a load balancing rule that provides load balancing criteria for selecting a DCN for the data message flow (e.g., by using the criteria to pick the DCN in a round robin manner).
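The sketch below illustrates both alternatives for selecting a DCN, i.e., hashing the flow's L2 and L3 header values or applying a round robin rule; the DCN names and the modulo mapping from hash value to DCN are illustrative choices rather than the scheme of any particular embodiment:

```python
import hashlib
import itertools

DCNS = ["dcn1", "dcn2", "dcn3"]           # hypothetical destination compute nodes
_round_robin = itertools.cycle(DCNS)      # state for the round robin alternative

def pick_dcn_by_hash(l2_l3_values):
    """Hash the flow's L2 and L3 header values and map the result to a DCN."""
    key = "|".join(str(v) for v in l2_l3_values).encode()
    value = int.from_bytes(hashlib.sha256(key).digest()[:4], "big")
    return DCNS[value % len(DCNS)]

def pick_dcn_round_robin():
    """Alternative: select the next DCN named by a round robin load balancing rule."""
    return next(_round_robin)

print(pick_dcn_by_hash(("00:00:00:00:00:05", "10.0.0.5", "10.0.1.9")))
print(pick_dcn_round_robin(), pick_dcn_round_robin(), pick_dcn_round_robin())
```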
As shown in
After receiving the data message, the process determines (at 710) whether the received message is part of a particular data message flow for which the PSN has previously processed at least one data message. To make this determination, the process examines (at 710) a flow connection-state data storage that stores (1) the identity of each of several data message flows that the PSN previously processed, and (2) the identity of the load balancer that the PSN previously identified as the load balancer for processing the data messages of each identified flow. In some embodiments, the process 700 identifies each flow in the connection-state data storage in terms of one or more flow attributes, e.g., the flow's five tuple header values. Also, in some embodiments, the connection-state data storage is hash indexed based on the hash of the flow attributes (e.g., of the flow's five tuple header values).
When the process identifies (at 710) an entry in the connection-state data storage that matches the received data message flow's attributes (i.e., when the process determines that it previously processed another data message that is part of the same flow as the received data message), the process directs (at 715) the received data message to the load balancer (in the SN group) that is identified in the matching entry of the connection-state data storage (i.e., to the load balancer that the PSN previously identified for processing the data messages of the particular data message flow). This load balancer then performs the first type of load balancing operation on the data message to direct the received data message to one compute node in the DCN set. This load balancer also augments the statistics that it maintains (e.g., the data message count, the byte count, etc.) regarding the data messages that it processes. This load balancer can be the PSN itself, or it can be an SSN in the SN group. After 715, the process ends.
On the other hand, when the process determines (at 710) that the connection-state data storage does not store an entry for the received data message (i.e., determines that it previously did not process another data message that is part of the same flow as the received data message), the process determines (at 720) whether the received data message should be processed locally by the PSN, or remotely by another load balancer of the SN group. To make this determination, the PSN in some embodiments performs the second type of load balancing operation that relies on a second set of load balancing parameters that the PSN maintains for the SN group at the time that the data message is received.
The second type of load balancing operation is based on different load balancing parameter sets in different embodiments. For instance, in some embodiments, the second type of load balancing operation is an L2 load balancing operation that relies on a load balancing parameter set that is defined in terms of L2 parameters. In other embodiments, the second type of load balancing operation is an L2 and/or L3 load balancing operation that relies on a load balancing parameter set that is defined in terms of L2 and/or L3 parameters. As mentioned before, the load balancing parameter set is adjusted in some embodiments (1) based on updated statistic data regarding the traffic load on each load balancer in the SN group, and (2) based on load balancers that are added to or removed from the SN group.
When the process determines (at 720) that the PSN should process the received data message, the process directs (at 725) a load balancer module of the PSN to perform the first type of load balancing operation on the received data message. The first type of load balancing operation relies on a first set of load balancing parameters that the PSN maintains for the DCN group at the time that the data message is received.
The first type of load balancing operation is based on different load balancing parameter sets in different embodiments. For instance, in some embodiments, the first type load balancing operation is an L3, L4 and/or L7 load balancing operation and the load balancing parameter set is defined in terms of L3, L4 and/or L7 parameters. In other embodiments, the first type load balancing operation is an L4 and/or L7 load balancing operation and the load balancing parameter set is defined in terms of L4 and/or L7 parameters.
Also, in some embodiments, an LB parameter set includes load balancing criteria that the load balancer uses to select a destination for the message (e.g., to select a destination in a weighted round robin fashion). In other embodiments, an LB parameter set includes a hash table that specifies several hash value ranges and a destination for each hash value range. The load balancer generates a hash value from a set of header values (e.g., the L3, L4 and/or L7 parameters) of a data message, and then selects for the message the destination that is associated with the hash-value range that contains the generated hash value. Some embodiments use the same load balancing approach (e.g., a hashing approach) for the first and second load balancing operations of the PSN, while other embodiments use different load balancing approaches (e.g., a hashing approach and a round robin approach) for these load balancing operations of the PSN.
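As a sketch of the first alternative (criteria used to pick destinations in a weighted round robin fashion), the example below expands hypothetical per-destination weights into a repeating selection order; the weights and destination names are placeholders:

```python
import itertools

# Hypothetical load balancing criteria: each destination appears with a weight,
# so higher-weight destinations receive proportionally more messages.
LB_CRITERIA = {"dcn1": 3, "dcn2": 2, "dcn3": 1}

# Expand the weights into a repeating selection order.
_order = itertools.cycle(
    [dcn for dcn, weight in LB_CRITERIA.items() for _ in range(weight)]
)

def select_destination():
    """Pick the next destination according to the weighted round robin criteria."""
    return next(_order)

print([select_destination() for _ in range(6)])  # dcn1 x3, dcn2 x2, dcn3 x1
```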
At 725, the PSN also augments the statistics that it maintains (e.g., the data message count, the byte count, etc.) regarding the data messages that it distributes to the DCN identified at 725. At 725, the process 700 also creates an entry in the connection-state data storage to identify the PSN as the load balancer for performing the first type of load balancing operation on the data messages that are part of the same flow as the received data message. In some embodiments, this entry identifies the PSN and identifies the received data message header values (e.g., five tuple values) that specify the message's flow. After 725, the process ends.
When the process determines (at 720) that based on its second set of load balancing parameters, the PSN should not distribute the received data message to one of the DCNs, the process identifies (at 720) another load balancer in the PSN's SN group to distribute the data message to a DCN. Thus, in this situation, the process directs (at 730) the message to another load balancer in the PSN's SN group. To redirect the data messages, the PSN in different embodiments uses different techniques, such as MAC redirect (for L2 forwarding), IP destination network address translation (for L3 forwarding), port address translation (for L4 forwarding), L2/L3 tunneling, etc. These techniques were described above by reference to operation 330 of the process 300 of
After directing (at 730) the data message to the other load balancer, the process creates (at 735) an entry in the connection-state data storage to identify the other load balancer as the service node for load balancing the data messages that are part of the same flow as the received data message. In some embodiments, this entry identifies (1) the other service node and (2) the received data message header values (e.g., five tuple values) that specify the message's flow. After 735, the process ends.
As mentioned above, the PSN's distribution of the data messages to other load balancers in its load balancing service group is based on the second set of load balancing parameters that is adjusted based on message load data aggregated and distributed by the controller set in some embodiments. In some embodiments, the data aggregated and distributed by the controller set also updates the first set of load balancing parameters that the load balancers in the PSN's load balancer group use to distribute the data messages amongst the DCNs in the DCN group. Examples of modifying such load balancing operations based on dynamically gathered and updated message load data are described in U.S. patent application Ser. No. 14/557,287, now issued as U.S. Pat. No. 10,320,679.
In
The second stage 810 illustrates that at a time T2, the service group 800 has been expanded to include a service node SSN1, which in this example is a load balancer LB 1_2. In some embodiments, the LB 1_2 is added to the group because the data message load on the group has exceeded a first threshold value. The controller set 225 in some embodiments adds LB 1_2 when it detects that the data message load has exceeded the first threshold value, or when the PSN detects this condition and directs the controller set to add this secondary service node. To assess whether the data message load exceeds a threshold value, the controller set or PSN in different embodiments quantifies the data message load based on different metrics, such as the metrics described above (e.g., by reference to
The second stage 810 also illustrates that at time T2, the LB 1_1 performs the group's load balancing on some of the data message flows, while directing other data message flows to LB 1_2 so that this load balancer can perform this service on these other flows. As shown, the first type load balancing operation that either LB 1_1 or LB 1_2 performs on a data message directs the data message to one of the compute nodes in the DCN group 825. As shown in
The third stage 815 illustrates that at a time T3, the SN group has been expanded to include yet another service node SSN2, which in this example is a load balancer LB 1_3. In some embodiments, the load balancer LB 1_3 is added to the group because the data message load on the group or on the PSN or SSN1 has exceeded a second threshold value. As before, the controller set 225 in some embodiments adds LB 1_3 when it or the PSN detects that the data message load has exceeded the second threshold value. The third stage 815 also illustrates that at time T3, the PSN distributes some of the data message flows among the DCNs, while directing other data message flows to LB 1_2 and LB 1_3 so that these load balancers can distribute these other flows among the DCNs.
Instead of relying on the SN group's PSN to distribute directly the data messages among the service nodes of the SN group, some embodiments use one or more front-end load balancers to do this task. For general purpose service nodes,
After sending (at 945) the configuration data to the FLB set, the PSN might continue to receive data messages for a data message flow that should be directed to another service node because the FLB set has not yet been reconfigured based on the sent configuration data, and therefore continues to send data messages of the redirected flow to the PSN. In a subsequent iteration for a data message of a flow that should be directed to another service node, the process forwards the data message to the other service node at 315.
The second stage 1010 illustrates that at a time T2, the SN group 1000 has been expanded to include another service node, SSN1, which is the service virtual machine SVM2. In some embodiments, the service node SSN1 is added to the group because the data message load on the group has exceeded a first threshold value, as quantified by a set of metrics (such as those described above by reference to
The second stage 1010 also illustrates that at time T2, the PSN configures the load balancer to direct some of the flows to the PSN while directing other flows to SSN1. Because of this configuration, the PSN performs the SN group's service on some of the data message flows, while SSN1 performs this service on other data message flows. The second stage 1010 also shows that the load balancer LB of the PSN 1020 directs some of the data message flows to SSN1 for this service node to process. These directed messages are those that the SSN1 has to process, but the PSN receives because the front-end load balancer 1050 has not yet been configured to forward these data messages to SSN1. As shown, once either the PSN or SSN1 performs the service on a data message, the PSN or SSN1 directs the data message to one of the destination compute nodes 1025 that should receive the data message after the SN group processes them.
The third stage 1015 illustrates that at time T3, the SN group 1000 has been expanded to include yet another service node, SSN2, which is the service virtual machine SVM3. In some embodiments, the service node SSN2 is added to the group because the data message load on the group, or on PSN and/or SSN1, has exceeded a second threshold value, as quantified by a set of metrics like those described above. As before, the controller set 225 in some embodiments adds SSN2 when it or the PSN detects that the data message load has exceeded the second threshold value.
The third stage 1015 also illustrates that at time T3, the PSN configures the load balancer 1050 to distribute the flows amongst all the SN group members, i.e., amongst the PSN, SSN1, and SSN2. Because of this configuration, the PSN performs the SN group's service on some of the data message flows, SSN1 performs this service on other data message flows, and SSN2 performs this service on yet other data message flows. As shown, once the PSN, SSN1, or SSN2 performs the service on a data message, it directs the data message to one of the destination compute nodes that should receive the data message after the SN group processes it.
The third stage 1015 also shows that the load balancer LB of the PSN 1020 directs some of the data message flows to SSN1 and SSN2 for these service nodes to process. These directed messages are those that SSN1 or SSN2 has to process, but that the PSN receives because the front-end load balancer 1050 has not yet been configured to forward these data messages to SSN1 or SSN2. In other embodiments, the PSN's load balancer does not direct the data message flows to SSNs during the second and third stages 1010 and 1015. For instance, in some embodiments, the FLB set queues a new data message flow until it receives instructions from the PSN as to which service node should process the new data message flow. In other embodiments, the PSN's load balancer does not direct the data messages to other SSNs because the FLB set statefully distributes the data message flows to the service nodes, as further explained below.
In the example illustrated in
The PSNs of different embodiments provide different LBP sets to their FLB sets. For instance, in some embodiments, the distributed LBP set includes the SN group membership (e.g., the network address (L2 and/or L3 address) of each service node in the SN group). In these embodiments, the FLB uses its own load balancing scheme (e.g., its own equal cost multipath, ECMP, process) to distribute the data message flows amongst the service nodes of the SN group. For instance, in some embodiments, the FLB set's ECMP process generates hash ranges based on the SN group membership that the PSN provides, and then uses the generated hash ranges to distribute the data message flows amongst the service nodes.
In other embodiments, the LBP set that the PSN distributes includes the SN group membership and a distribution scheme for the FLB to use to distribute flows across the service nodes of the SN group. For instance, in some embodiments, the PSN provides to the FLB a hash table that identifies each service node of a SN group and specifies a hash range for each service node of the group. In some embodiments, the hash table (e.g., a hash table the PSN generates for itself or for an FLB) can specify the same destination node (e.g., the same service node) for two or more contiguous or non-contiguous hash ranges specified by the hash table.
The FLB generates a hash value for each flow (e.g., from the flow's five tuple), and then uses the PSN-provided hash table to identify the service node for the flow (i.e., identifies the hash range in the supplied table that contains the generated hash value, and then identifies the service node associated with the identified hash range). In some embodiments, each time that the SN group membership changes, the PSN distributes to the FLB set a new LBP set, which may include (i) an updated group membership and/or (ii) an updated distribution scheme (e.g., an updated hash table). Also, in some embodiments, each time that the PSN determines that the load distribution has to be modified amongst the existing service nodes of the SN group, the PSN distributes an updated LBP set to the FLB set to modify the FLB set's distribution of the data message flows amongst the service nodes of the SN group. In some embodiments, an updated LBP set includes an updated hash table, which may have more hash ranges or new service nodes for previously specified hash ranges.
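The following Python sketch illustrates one way such a hash-table lookup might work; the hash-space size, range boundaries, hashing scheme, and node names are illustrative assumptions rather than values prescribed by these embodiments.

```python
import hashlib
from bisect import bisect_right

# Hypothetical PSN-supplied hash table: each entry gives the start of a hash
# range and the service node that owns it; a range runs to the next entry's start.
# Two or more ranges may map to the same service node, as noted above.
HASH_SPACE = 2 ** 16
hash_table = [
    (0, "PSN"),        # [0, 21845)      -> PSN
    (21845, "SSN1"),   # [21845, 43690)  -> SSN1
    (43690, "SSN2"),   # [43690, 65536)  -> SSN2
]

def flow_hash(five_tuple):
    """Hash a flow's five-tuple into the FLB's hash space (illustrative scheme)."""
    digest = hashlib.sha1("|".join(map(str, five_tuple)).encode()).digest()
    return int.from_bytes(digest[:4], "big") % HASH_SPACE

def pick_service_node(five_tuple):
    """Return the service node whose hash range contains the flow's hash value."""
    h = flow_hash(five_tuple)
    starts = [start for start, _ in hash_table]
    return hash_table[bisect_right(starts, h) - 1][1]

# Every data message of the same flow hashes to the same range, hence the same node.
print(pick_service_node(("10.0.0.5", 33020, "10.1.1.10", 443, "TCP")))
```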
In some embodiments, a front-end load balancer and a PSN use stateful load balancing processes that ensure that flows that were previously processed by a service node remain with that node even after a new service node is added to the SN group. This is because, without the stateful nature of these load balancing processes, a flow that was processed by one service node might get directed to a new service node, because the addition of a new node might affect a load balancing scheme or a load balancing computation (e.g., a hash computation) that the load balancers use to distribute the data message flows amongst the service nodes of the group. One way that an FLB or PSN ensures stateful load balancing in some embodiments is to use a flow connection-state storage that stores the identity of the service node for a previously processed flow.
In other embodiments, the FLB set uses a stateless load balancing scheme. For instance, in some embodiments, the FLB is a simple forwarding element (e.g., a hardware or software switch) that receives the SN group membership, defines several hash value ranges and their associated service nodes based on the number of service nodes, and then performs a stateless ECMP process that distributes the data messages as it receives them by generating hashes of the message header values and determining the hash ranges that contain the generated hashes. In other embodiments, the FLB is a forwarding element (e.g., a software or hardware switch) that (1) receives from the PSN a hash table containing several hash value ranges and their associated service nodes, and (2) performs a stateless ECMP process that distributes the data messages as it receives them by generating hashes of the message header values and determining the hash ranges that contain the generated hashes.
The FLB in either of these approaches does not maintain the flow connection states. Whenever the PSN provides a new LBP set (e.g., new SN group membership and/or distribution scheme) in either of these stateless approaches, the FLB may forward a flow that was previously processed by one service node to another service node. To prevent different service nodes from processing the same flow, the service nodes of the SN group of some embodiments synchronize their flow connection states (e.g., through control channel communications) so that when an FLB forwards an old flow that was handled by a first service node to a second service node, the second service node can detect that the first service node was processing this flow and re-direct the flow back to the first service node. In some embodiments, a service node may also re-direct a new flow to another service node even when the flow was not previously processed by the other service node. For example, in some cases, the service node detects that although the flow is new and has not been processed by any other service node, it should be handled by another service node once the FLB set reconfigures based on an updated LBP set that the PSN distributes.
To determine when flows need to be re-directed, each service node (i.e., the PSN and each SSN) in a SN group in some embodiments includes a load balancer that performs the secondary load balancing operation to direct messages to other service nodes.
However, in the example illustrated in
In the example of
One example of a data message flow that a first service node re-directs to a second service node is a flow that was previously processed by the second service node but that, after the LBP set is updated, gets forwarded to the first service node by the FLB's stateless load balancing. To identify such a flow, the service nodes of some embodiments synchronize their flow connection states (e.g., through control channel communications). Another example of re-directing a data message flow is when the FLB has not yet reconfigured its operations based on a new LBP set from the PSN and forwards a new flow to a service node that determines, based on the new LBP set, that another service node should process this new flow. To identify the need for such a re-direction, the SSNs in some embodiments obtain the LBP set updates from the PSN, or derive the LBP set updates independently of the PSN by using similar update processes. To derive the LBP set updates independently, the SSNs receive the same global statistics (from the controller set or from the other service nodes) as the PSN in some embodiments.
As shown in
The flow connection-state storage includes the flows that are being currently processed by all the service nodes. To maintain this storage, the service nodes synchronize the records in the connection-state storage on a real-time basis in some embodiments. This synchronization is through control channel communications in some embodiments. Also, in some embodiments, the process identifies each flow in the connection-state data storage in terms of one or more flow attributes, e.g., the flow's five tuple header values. As mentioned above, the connection-state data storages in some embodiments are hash indexed storages.
When the process identifies (at 1210) an entry in the connection-state data storage that matches the received data message flow's attributes (i.e., when it determines that the data message flow has been previously processed by one of the service nodes), the process then determines (at 1212) the identity of the service node that should process the data message from the matching connection-state data storage entry. When this matching entry specifies that another service node should process the received message, the process then directs (at 1214) the received data message to this other service node and then ends. The process 1200 re-directs the data messages to the other service node using one of the approaches mentioned above (e.g., MAC redirect, destination network address translation, etc.).
When the process determines (at 1212) that the matching entry identifies the process' associated SVM (e.g., SVM1 of PSN or SVM2 of SSN1 in
When the process determines (at 1210) that the connection-state data storage does not store an entry for the received data message (i.e., determines that the received data message flow is not a flow currently being processed by any service node), the process determines (at 1220) whether the received data message should be processed locally by its SVM, or remotely by another service node of the SN group. In some embodiments, another service node should process the received data message's flow when the LBP set (e.g., the SN group membership) has changed but the FLB set has yet to complete its reconfiguration for a new distribution scheme that accounts for the LBP set update (e.g., an addition or removal of a service node to or from the SN group).
To make the determination at 1220, the process needs to know of the LBP set update. When the process is performed by an SSN, the SSN would have to receive the LBP set update from the PSN, or would have to independently derive the LBP set update by using similar processes and similar input data as the PSN. In some embodiments, the LBP set update identifies the service node that should process a new data message flow. In other embodiments, the FLB set uses the LBP set to derive its load distribution scheme (e.g., to derive the hash values for its ECMP distribution scheme). For these embodiments, a service node would need to generate a load distribution scheme (e.g., to generate a hash table) from the LBP set update in the same manner as the FLB set, and then use this generated distribution scheme to identify the service node that should receive a new data message flow (e.g., to identify the service node associated with a hash table range that contains a hash that is derived from the data message's header values).
When the process determines (at 1220) that its associated SVM should process the received data message, the process directs (at 1225) its SVM to perform the SN group's service on the received data message. Based on this operation, the SVM also augments (at 1225) the statistics that it maintains (e.g., the data message count, the byte count, etc.) regarding the data messages that it processes. At 1225, the process 1200 also creates an entry in the connection-state data storage to identify its SVM as the service node for processing data messages that are part of the same flow as the received data message. In some embodiments, this entry identifies the SVM and identifies the received data message header values (e.g., five tuple values) that specify the message's flow. After 1225, the process ends.
When the process determines (at 1220) that another service node should process the data message, the process directs (at 1230) the message to another service node in the SN group. To redirect the data messages, the process 1200 in different embodiments uses different techniques, such as MAC redirect (for L2 forwarding), IP destination network address translation (for L3 forwarding), port address translation (for L4 forwarding), L2/L3 tunneling, etc. These operations were described above by reference to
After directing (at 1230) the data message to the other service node, the process creates (at 1235) an entry in the connection-state data storage to identify the other service node as the service node for processing data messages that are part of the same flow as the received data message. In some embodiments, this entry identifies (1) the other service node and (2) the received data message header values (e.g., five tuple values) that specify the message's flow. After 1235, the process ends.
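The following Python sketch condenses the dispatch logic of process 1200 under illustrative assumptions; the connection-state storage, the LBP-derived owner lookup, and the redirect and service placeholders are hypothetical stand-ins for the mechanisms described above, not a prescribed implementation.

```python
# Hypothetical in-memory stand-ins for the synchronized connection-state storage
# and for an LBP-derived distribution scheme; all names are illustrative only.
NODES = ["PSN", "SSN1", "SSN2"]
LOCAL_NODE = "SSN1"            # the service node running this copy of the process
connection_state = {}          # flow five-tuple -> owning service node

def node_for_new_flow(flow):
    """Stand-in for deriving a new flow's owner from the LBP set, as the FLB would."""
    return NODES[hash(flow) % len(NODES)]

def redirect(message, node):
    """Placeholder for MAC redirect, DNAT, or tunneling toward another service node."""
    print(f"redirect flow {message['flow']} to {node}")

def perform_service(message):
    """Placeholder for the SN group's service on the data message."""
    print(f"{LOCAL_NODE} services flow {message['flow']}")

def process_message(message):
    flow = message["flow"]
    owner = connection_state.get(flow)       # 1210: is this a known flow?
    if owner is not None:
        if owner != LOCAL_NODE:              # 1212/1214: another node owns the flow
            redirect(message, owner)
        else:
            perform_service(message)         # matching entry names this node's SVM
        return
    owner = node_for_new_flow(flow)          # 1220: new flow, consult the LBP set
    connection_state[flow] = owner           # 1225/1235: record the flow's owner
    if owner == LOCAL_NODE:
        perform_service(message)             # 1225: process locally
    else:
        redirect(message, owner)             # 1230: direct to the other service node

process_message({"flow": ("10.0.0.5", 33020, "10.1.1.10", 443, "TCP")})
```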
In the example illustrated in
As mentioned above, the distribution of the data message load to the DCNs is referred to as the first type load balancing while the distribution of the data message load to the group's service nodes (so that each can perform the first type load balancing) is referred to as the second type load balancing. In some embodiments, the PSN's second type load balancing in the system of
In some embodiments, one SVM performs both the load balancing operation and the service operation (which may be a non-load balancing service or a load balancing service) of a PSN or an SSN. However, in other embodiments, the two operations of such a service node (e.g., of a PSN, or of an SSN in the cases where the SSN performs a load balancing operation and another service) are performed by two different modules of the service node's associated host.
In some of these embodiments, the service node's service operation is performed by an SVM, while the service node's load balancing operation is performed by a load balancer that intercepts data messages from the datapath to the SVM. One such approach is illustrated in
In addition to the SVMs 1405 and load balancers 1415, the host 1400 executes one or more GVMs 1402, a software forwarding element 1410, an LB agent 1420, and a publisher 1422. The host also has LB rule storage 1440 and the STATs data storage 1445, as well as group membership data storage 1484, policy data storage 1482, aggregated statistics data storage 1486, and connection state storage 1490.
The software forwarding element (SFE) 1410 executes on the host to communicatively couple the VMs of the host to each other and to other devices outside of the host (e.g., other VMs on other hosts) through the host's physical NIC (PNIC) and one or more forwarding elements (e.g., switches and/or routers) that operate outside of the host. As shown, the SFE 1410 includes a port 1430 to connect to a PNIC (not shown) of the host. For each VM, the SFE also includes a port 1435 to connect to the VM's VNIC 1425. In some embodiments, the VNICs are software abstractions of the PNIC that are implemented by the virtualization software (e.g., by a hypervisor). Each VNIC is responsible for exchanging packets between its VM and the SFE 1410 through its corresponding SFE port. As shown, a VM's egress datapath for its data messages includes (1) the VM's VNIC 1425, (2) the SFE port 1435 that connects to this VNIC, (3) the SFE 1410, and (4) the SFE port 1430 that connects to the host's PNIC. The VM's ingress datapath is the same except in the reverse order (i.e., first the port 1430, then the SFE 1410, then the port 1435, and finally the VNIC 1425).
In some embodiments, the SFE 1410 is a software switch, while in other embodiments it is a software router or a combined software switch/router. The SFE 1410 in some embodiments implements one or more logical forwarding elements (e.g., logical switches or logical routers) with SFEs executing on other hosts in a multi-host environment. A logical forwarding element in some embodiments can span multiple hosts to connect VMs that execute on different hosts but belong to one logical network. In other words, different logical forwarding elements can be defined to specify different logical networks for different users, and each logical forwarding element can be defined by multiple SFEs on multiple hosts. Each logical forwarding element isolates the traffic of the VMs of one logical network from the VMs of another logical network that is serviced by another logical forwarding element. A logical forwarding element can connect VMs executing on the same host and/or different hosts.
Through its port 1430 and a NIC driver (not shown), the SFE 1410 connects to the host's PNIC to send outgoing packets and to receive incoming packets. The SFE 1410 performs message-processing operations to forward messages that it receives on one of its ports to another one of its ports. For example, in some embodiments, the SFE tries to use header values in the VM data message to match the message to flow-based rules, and upon finding a match, to perform the action specified by the matching rule (e.g., to hand the packet to one of its ports 1430 or 1435, which directs the packet to be supplied to a destination VM or to the PNIC). In some embodiments, the SFE extracts from a data message a virtual network identifier (VNI) and a MAC address. The SFE in these embodiments uses the extracted VNI to identify a logical port group, and then uses the MAC address to identify a port within the port group.
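As an illustrative sketch (not the prescribed implementation), this two-step lookup could be modeled as follows; the VNI, MAC addresses, and port identifiers are hypothetical.

```python
# Hypothetical lookup tables for the two-step forwarding decision described above:
# the VNI selects a logical port group, and the destination MAC selects a port in it.
port_groups = {
    5001: {
        "00:50:56:aa:bb:01": "sfe-port-1435-vm1",
        "00:50:56:aa:bb:02": "sfe-port-1435-vm2",
    },
}

def lookup_port(vni, dst_mac):
    group = port_groups.get(vni)
    if group is None:
        return None                  # unknown logical network
    return group.get(dst_mac)        # None if the MAC is not in this port group

print(lookup_port(5001, "00:50:56:aa:bb:02"))   # -> sfe-port-1435-vm2
```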
The SFE ports 1435 in some embodiments include one or more function calls to one or more modules that implement special input/output (I/O) operations on incoming and outgoing packets that are received at the ports. One of these function calls for a port is to a load balancer in the load balancer set 1415. In some embodiments, the load balancer performs the load balancing operations on incoming data messages that are addressed to the load balancer's associated VM (e.g., the load balancer's SVM that has to perform a service on the data message). For the embodiments illustrated by
Examples of other I/O operations that are implemented by the ports 1435 include ARP proxy operations, message encapsulation operations (e.g., encapsulation operations needed for sending messages along tunnels to implement overlay logical network operations), etc. By implementing a stack of such function calls, the ports can implement a chain of I/O operations on incoming and/or outgoing messages in some embodiments. Instead of calling the I/O operators (including the load balancer set 1415) from the ports 1435, other embodiments call these operators from the VM's VNIC or from the port 1430 of the SFE.
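The following sketch shows, under illustrative assumptions, how a port might invoke such a chain of I/O operators, with the in-line load balancer as one operator among others; the operator names and message representation are hypothetical.

```python
# Hypothetical I/O chain on an SFE port: the port invokes each operator in turn,
# and the in-line load balancer is just one operator in that chain.
def load_balancer_op(message):
    message.setdefault("ops", []).append("load-balance")
    return message

def encapsulation_op(message):
    message.setdefault("ops", []).append("encapsulate")
    return message

class SfePort:
    def __init__(self, io_chain):
        self.io_chain = io_chain          # ordered list of I/O operators

    def receive(self, message):
        for op in self.io_chain:
            message = op(message)         # each operator may inspect or rewrite it
        return message                    # then hand the message to the SFE / VNIC

port = SfePort([load_balancer_op, encapsulation_op])
print(port.receive({"flow": "f1"}))
```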
In some embodiments, a PSN of a SN group is formed by an SVM 1405 and the SVM's associated in-line load balancer 1415. Also, for the embodiments that have an SSN perform a load balancing operation in addition to its service operation, the SSN is formed by an SVM and the SVM's associated in-line load balancer 1415. When an SSN does not perform a load balancing operation to distribute message flows to other service nodes, each SSN is implemented by only an SVM in some embodiments, while other embodiments implement each SSN with an SVM and a load balancer 1415 so that this load balancer can maintain statistics regarding the data message load on the SSN's SVM.
In some embodiments, an SVM's load balancer performs the load balancing operation needed to distribute data messages to its own SVM or to other SVMs in its SN group. When the SVM and the load balancer form a PSN, the PSN's load balancer in some embodiments may perform one or more of the following operations: (1) directing the controller set to modify the SN group membership, (2) supplying statistics to the controller set, (3) receiving global statistics from the controller set, (4) receiving statistics from the SSNs, and (5) providing LBP data (including group membership data) to the SSNs.
When a PSN works with an FLB set, the PSN's load balancer in some embodiments configures (e.g., provides LBP set to) the FLB set, so that the FLB set can perform its load balancing operation to distribute the load amongst the service nodes of the SN group 1300. The PSN's load balancer configures the FLB differently in different embodiments. For instance, in some embodiments, the PSN simply provides the FLB with a list of service nodes in the SN group. In other embodiments, the PSN also provides the FLB with a specific distribution scheme (e.g., a hash lookup table, etc.). In still other embodiments, for each new flow that the FLB sends the PSN, the PSN configures the FLB with the identity of the service node for processing this new flow.
In other embodiments, the PSN's load balancer does not communicate with the controller set, does not send LBP data to its group's SSNs, and/or does not configure the FLB, because some or all of these operations are performed by the LB agent 1420 of the PSN's host. For instance, in some of these embodiments, the LB agent 1420 of the host communicates with the controller set (1) to provide statistics regarding its host's service nodes, and (2) to receive global statistics, group membership updates, and/or membership update confirmations for the SN group of any PSN that executes on its host. Also, in some embodiments, the LB agent 1420 provides the SSNs with LBP data and/or configures the FLB, as further described below.
In some embodiments, each SN group is associated with a VIP address and this address is associated with the SN group's PSN. In some of these embodiments, the load balancer 1415 of the SN group's PSN handles ARP messages that are directed to the group's VIP. In this manner, the initial data messages of new data message flows to the group's VIP will be forwarded to the load balancer of the group's PSN. In other embodiments, the PSN's load balancer does not handle the ARP messages to the group's VIP, but another module that executes on the PSN's host handles the ARP messages, and this module's response ensures that the initial data messages of new data message flows to the group's VIP are forwarded to the PSN's load balancer. For example, in some embodiments, an ARP proxy module is inserted in the datapath of the PSN's SVM in the same manner as the PSN's load balancer (i.e., the ARP proxy is called by the SVM's VNIC or SFE port). This ARP proxy then responds to the ARP messages for the SN group's VIP address. It should be noted that the ARP message response is disabled on all SSNs of the SN group. Also, in some embodiments, the PSN's ARP module (e.g., its load balancer or ARP proxy module) sends out gratuitous ARP replies when the service is first started on the primary host.
A service node's load balancer 1415 performs its load balancing operations based on the LB rules that are specified in the LB rule storage 1440. For a virtual address (e.g., VIP) of a load balanced group, the LB rule storage 1440 stores a load balancing rule that specifies two or more physical addresses (e.g., MAC addresses) of service nodes of the group to which a data message can be directed. As mentioned above, a PSN's associated load balancer may direct a data message to its SVM or to one or more SVMs that execute on the same host or different hosts. In some embodiments, this load balancing rule also includes load balancing metrics for specifying how the load balancer should bias the spreading of traffic across the service nodes of the group associated with a virtual address.
One example of such load balancing metrics is illustrated in
Each rule's tuple set 1505 includes the VIP address of the rule's associated SN group. In some embodiments, the tuple set 1505 also includes other data message identifiers, such as source IP address, source port, destination port, and protocol. In some embodiments, a load balancer examines a LB data storage by comparing one or more message identifier values (e.g., message five-tuple header values) to the rule tuple sets 1505 to identify a rule that has a tuple set that matches the message identifier values. Also, in some embodiments, the load balancer identifies the location in the data storage 1440 that may contain a potentially matching tuple set for a received data message by generating a hash of the received data message identifier values (e.g., the message five-tuple header values) and using this hash as an index that identifies one or more locations that may store a matching entry. The load balancer then examines the tuple set 1505 at an identified location to determine whether the tuple set 1505 stored at this location matches the received message's identifier values.
In some embodiments, the MAC addresses 1510 of an LB rule are the MAC addresses of the SVMs of the SN group that has the VIP address specified in the rule's tuple set 1505. The weight values 1515 for the MAC addresses of each LB rule provide the criteria for a load balancer to spread the traffic to the SVMs that are identified by the MAC addresses. For instance, in some embodiments, the PSN's load balancer uses a weighted round robin scheme to spread the traffic to the SVMs of the load balanced SN group. As one example, assume that the SN group has five SNs (i.e., five SVMs) and the weight values for the MAC addresses of these SNs are 1, 3, 1, 3, and 2. Based on these values, a load balancer would distribute data messages that are part of ten new flows as follows: 1 to the first MAC address, 3 to the second MAC address, 1 to the third MAC address, 3 to the fourth MAC address, and 2 to the fifth MAC address.
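The following Python sketch illustrates this weighted round robin example with the weight values 1, 3, 1, 3, and 2; the MAC address names and the simple slot-expansion scheduler are illustrative assumptions rather than the prescribed scheme.

```python
from itertools import cycle

# Hypothetical LB rule entry: five MAC addresses with the weight values from the example.
macs = ["mac1", "mac2", "mac3", "mac4", "mac5"]
weights = [1, 3, 1, 3, 2]

# Expand the weights into one scheduling round of ten slots (1+3+1+3+2 = 10).
round_slots = [mac for mac, w in zip(macs, weights) for _ in range(w)]
scheduler = cycle(round_slots)

def svm_for_new_flow():
    """Weighted round robin: over any ten new flows, mac2 and mac4 each get three."""
    return next(scheduler)

print([svm_for_new_flow() for _ in range(10)])
```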
When the load balancer 1415 identifies an LB rule for a received data message and then, based on the rule's LB criteria, identifies an SVM for the data message, the load balancer replaces the message's original destination MAC address with the identified SVM's MAC address when the message's original destination MAC address is not the identified SVM's MAC address (i.e., is not the MAC address of the load balancer's SVM). The load balancer then sends the data message along its datapath. In some embodiments, this operation entails returning a communication to the SFE port 1435 (that called the load balancer) to let the port know that the load balancer is done with its processing of the data message. The SFE port 1435 can then hand off the data message to the SFE 1410 or can call another I/O chain operator to perform another operation on the data message. Instead of using MAC redirect, the load balancers 1415 of some embodiments perform destination network address translation (DNAT) operations on the received data messages in order to direct the data messages to the correct SVMs. DNAT operations entail replacing the VIP address in the data message with the IP address of the identified SVM.
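The following sketch illustrates the two redirection options under simplified assumptions, treating a data message as a small dictionary of header fields; the field names and addresses are hypothetical.

```python
# Hypothetical message-rewrite helpers for the two redirection options described:
# MAC redirect (L2) retargets the destination MAC address, while DNAT (L3)
# replaces the group VIP with the identified SVM's IP address.
def mac_redirect(message, svm_mac):
    if message["dst_mac"] != svm_mac:            # only rewrite when needed
        message = dict(message, dst_mac=svm_mac)
    return message

def dnat(message, svm_ip):
    return dict(message, dst_ip=svm_ip)          # original dst_ip carried the VIP

msg = {"dst_mac": "mac-psn", "dst_ip": "10.10.10.10", "flow": "f1"}   # VIP as dst_ip
print(mac_redirect(msg, "mac-svm2"))
print(dnat(msg, "192.168.1.12"))
```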
In some embodiments, the load balancers maintain statistics in the STAT data storage 1445 about the data messages that they direct to their associated SVM. To maintain such statistics for data message load on the SSNs, some embodiments have a load balancer 1415 for each SSN even when the SSNs do not have to distribute message flows to other service nodes. In such cases, other embodiments do not employ a load balancer 1415 for an SSN, but rather have the SSN's SVM maintain such statistics and have the LB agent of the SVM's host obtain these statistics from the SVM.
In some embodiments, the LB agent 1420 periodically supplies to the controller set the statistics that are gathered (e.g., by the load balancers 1415 or the SVMs 1405 of the service nodes) for a SN group and stored in the STAT data storage 1445. In some embodiments, the LB agent 1420 generates and updates the LBP set (e.g., load balancing weight values or load balancing hash table) for a SN group with PSNs and/or SSNs on the agent's host. When multiple different SN groups have SVMs and load balancers executing on a host, the host's LB agent 1420 in some embodiments performs some or all of its operations for all of the SN groups that execute on its host. Other embodiments, however, use different LB agents for different SN groups that have SVMs and load balancers executing on the same host.
To gracefully switch between different LBP sets, the LB rules in some embodiments specify the time periods during which the different LBP sets are valid.
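One illustrative way to express such time-bounded LBP sets is sketched below; the rule layout, field names, and switchover times are assumptions for illustration only.

```python
import time

# Hypothetical LB rule whose parameter sets carry validity windows, so the load
# balancers can switch from the old LBP set to the new one at an agreed time.
now = time.time()
lb_rule = {
    "vip": "10.10.10.10",
    "param_sets": [
        {"valid_from": now - 3600, "valid_until": now + 60,     "weights": [2, 2, 1]},
        {"valid_from": now + 60,   "valid_until": float("inf"), "weights": [1, 3, 1]},
    ],
}

def active_weights(rule, t):
    for params in rule["param_sets"]:
        if params["valid_from"] <= t < params["valid_until"]:
            return params["weights"]
    return None

print(active_weights(lb_rule, now))        # the currently valid LBP set
print(active_weights(lb_rule, now + 120))  # the LBP set after the switchover time
```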
In the example illustrated in
In
As shown in
More specifically, whenever a load balancer identifies an SVM for a data message based on the message's group destination address (e.g., the destination VIP), the load balancer not only may replace the destination MAC address, but also stores a record in the connection state storage 1490 to identify the SVM for subsequent data messages that are part of the same flow. This record stores the MAC address of the identified SVM along with the data message's header values (e.g., source IP address, source port, destination port, destination VIP, protocol). The connection data storage 1490 is hash indexed based on the hash of the data message header values.
Accordingly, to identify an SVM for a received data message, the load balancer first checks the connection state storage 1490 to determine whether it has previously identified an SVM for receiving data messages that are in the same flow or flow hash range as the received message. If so, the load balancer uses the SVM that is identified in the connection state storage. Only when the load balancer does not find a connection record in the connection state storage 1490 does the load balancer, in some embodiments, examine the LB rule storage to try to identify an SVM for the data message.
In
As mentioned above, the LB agent 1420 of some embodiments gathers (e.g., periodically collects) the statistics that the load balancers store in the STATs data storage(s) 1445, and relays these statistics to the controller set. Based on the statistics that the controller set gathers from the various LB agents of the various hosts, the LB controller set in some embodiments (1) distributes the aggregated statistics to each host's LB agent so that each LB agent can define and/or adjust its load balancing parameter set, and/or (2) analyzes the aggregated statistics to specify and distribute some or all of the load balancing parameter set for the load balancers to enforce. In some embodiments where the LB agent receives a new load balancing parameter set from the LB controller set, the LB agent stores the parameter set in the host-level LB rule storage 1488 for propagation to the LB rule storage(s) 1440.
In the embodiment where the LB agent receives aggregated statistics from the LB controller set, the LB agent stores the aggregated statistics in the global statistics data storage 1486. In some embodiments, the LB agent 1420 analyzes the aggregated statistics in this storage 1486 to define and/or adjust the LBP set (e.g., weight values or hash lookup tables), which it then stores in the LB rule storage 1488 for propagation to the LB rule storage(s) 1440. The publisher 1422 retrieves each LB rule that the LB agent 1420 stores in the LB rule storage 1488, and stores the retrieved rule in the LB rule storage 1440 of the load balancer 1415 that needs to enforce this rule.
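The following sketch shows one possible, purely hypothetical heuristic by which an LB agent could derive new weight values from aggregated per-node load statistics; the actual adjustment is governed by the policies in the policy storage 1482, and the metric and scaling choices here are assumptions.

```python
# Hypothetical heuristic: derive new weight values from aggregated per-node load
# (here, byte counts), giving lightly loaded service nodes proportionally more
# new flows. The real policy comes from the policy storage 1482.
global_stats = {"PSN": 900_000, "SSN1": 300_000, "SSN2": 300_000}   # bytes processed

def derive_weights(stats, scale=6):
    inverse = {node: 1.0 / max(load, 1) for node, load in stats.items()}
    total = sum(inverse.values())
    # Round to small integer weights usable by a weighted round-robin scheduler.
    return {node: max(1, round(scale * inv / total)) for node, inv in inverse.items()}

print(derive_weights(global_stats))   # -> {'PSN': 1, 'SSN1': 3, 'SSN2': 3}
```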
The LB agent 1420 not only propagates LB rule updates based on newly received aggregated statistics, but it also propagates LB rules or updates LB rules based on updates to SN groups. In some embodiments, the controller set updates the SN group. In other embodiments, the SN group's PSN modifies the SN group. In still other embodiments, the controller set updates the SN group at the direction of the group's PSN (e.g., at the direction of the LB agent 1420 or the load balancer 1415 of the PSN SVM of the SN group).
The LB agent 1420 stores each SN group's members in the group data storage 1484. When a SN is added to or removed from a SN group, the LB agent 1420 of some embodiments stores this update in the group storage 1484, and then formulates updates to the LB rules to add or remove the destination address of this SN from the LB rules that should include or already include this address. Again, the LB agent 1420 stores such updated rules in the rule data storage 1488, from where the publisher propagates them to the LB rule storage(s) 1440 of the load balancers that need to enforce these rules.
In some embodiments, the LB agent 1420 stores, in the policy storage 1482, LB policies that direct the operation of the LB agent in response to newly provisioned SVMs and their associated load balancers, and/or in response to updated global statistics and/or adjusted SN group membership. The policies in the policy storage 1482 in some embodiments are supplied by the controller set.
At 1710, the process 1700 determines whether the received update includes an update to the membership of at least one SN group for which the LB agent generates and/or maintains the LB rules. In some embodiments, the PSN's load balancer 1415 or the LB agent 1420 directs the controller set to instantiate a new SVM for the SN group or to allocate a previously instantiated SVM to the SN group, when the load balancer 1415 or the LB agent 1420 determines that a new service node should be added to the SN group. Similarly, when the load balancer 1415 or the LB agent 1420 determines that the SN group should shrink, the load balancer 1415 or the LB agent 1420 directs the controller set to remove one or more SVMs from the SN group. Thus, in these embodiments, the received group update is in response to a group adjustment request from the load balancer 1415 or the LB agent 1420.
When the process determines (at 1710) that the received update does not include a membership update, the process transitions to 1720. Otherwise, the process creates and/or updates (at 1715) one or more records in the group membership storage 1484 to store the updated group membership that the process received at 1705. From 1715, the process transitions to 1720.
At 1720, the process 1700 determines whether the received update includes updated statistics for at least one SN group for which the LB agent generates and/or maintains the LB rules. If not, the process transitions to 1730. Otherwise, the process creates and/or updates (at 1725) one or more records in the global statistics storage 1486 to store the updated global statistics that the process received at 1705. From 1725, the process transitions to 1730.
At 1730, the process initiates a process to analyze the updated records in the group membership storage 1484 and/or the global statistics storage 1486 to update the group memberships (e.g., the IP addresses) and/or the load balancing parameter set (e.g., the weight values or hash lookup table) of one or more LB rules in the host-level LB rule data storage 1488. In some embodiments, the policies that are stored in the policy storage 1482 control how the LB agent 1420 updates the LB rules based on the updated group membership record(s) and/or the updated global statistics. In some embodiments, the LB agent performs an identical or similar process (1) when the LB agent powers up (e.g., when its host powers up) to configure the LB rules of the load balancers on the host, and (2) when a new SVM 1405 is instantiated on the host and the LB agent needs to configure the LB rules of the instantiated SVM's associated load balancer 1415.
In different embodiments, the process 1700 updates (at 1730) the load balancing parameter set differently. For instance, in some embodiments, the process updates weight values and/or time values for load balancing criteria, and/or updates the service nodes for one or more weight values. In other embodiments, the process updates hash tables by modifying hash ranges, adding new hash ranges, and/or specifying new service nodes for new or previous hash ranges. As mentioned before, multiple contiguous or non-contiguous hash ranges in some embodiments can map to the same service node. In some embodiments, updates to the hash table re-assign a hash range from one service node to another service node.
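The following sketch illustrates, under simplified assumptions, the two kinds of hash-table edits mentioned here: re-assigning an existing hash range to a different service node, and splitting a range to make room for a newly added node; the table layout and range boundaries are hypothetical.

```python
# Hypothetical hash-table updates of the kind described at 1730: re-assign an
# existing range to a different service node, or split a range for a new node.
hash_table = [
    {"range": (0, 32768),     "node": "PSN"},
    {"range": (32768, 65536), "node": "SSN1"},
]

def reassign_range(table, start, new_node):
    for entry in table:
        if entry["range"][0] == start:
            entry["node"] = new_node           # same range, new owner
    return table

def split_range_for_new_node(table, index, new_node):
    lo, hi = table[index]["range"]
    mid = (lo + hi) // 2
    table[index]["range"] = (lo, mid)          # shrink the old range
    table.insert(index + 1, {"range": (mid, hi), "node": new_node})
    return table

split_range_for_new_node(hash_table, 1, "SSN2")   # SSN2 takes the top half of SSN1's range
print(hash_table)
```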
From the host-level LB rule data storage 1488, the publisher 1422 propagates each new or updated LB rule to the LB rule data storages 1440 of the individual load balancers 1415 (on the same host) that need to process the new or updated LB rule. In publishing each new or updated LB rule, the publisher 1422 does not publish the LB rule to the rule data storage 1440 of a load balancer (on the same host) that does not need to process the rule.
In some embodiments, the updated LB rules also have to be supplied to the load balancers of the SSNs. In some of these embodiments, the updated LB rules are distributed by the LB agent 1420 or publisher 1422 of the PSN's host to the host-level data storage 1488 of other hosts that execute SSNs of the PSN's SN group. In other embodiments, however, the LB agent 1420 on these other hosts follows the same LB policies to generate the same LB rule updates on these other hosts, and the publisher on these hosts pushes these updated LB rules to the LB rule data storages 1440 of the SSNs' load balancers 1415. Accordingly, in these embodiments, the updated rules do not need to be distributed from the PSN's host to the hosts that execute SSNs of the PSN's SN group.
After 1730, the process 1700 ends.
As shown, the process 1800 initially analyzes (at 1805) the data message load on the service nodes of the SN group. Next, at 1810, the process determines whether the SN group membership should be updated in view of analyzed message load data. In some embodiments, when the message load on the SN group as a whole exceeds a first threshold, the process determines (at 1810) that a service node should be added to the SN group. In other embodiments, the process decides (at 1810) to add a service node to the SN group when the message load on one or more service nodes in the SN group exceeds the first threshold.
Conversely, the process determines (at 1810) to remove a service node from the SN group when it determines that the message load on the SN group as a whole, or on one or more service nodes individually, is below a second threshold value. The second threshold value is different than the first threshold value in some embodiments, while it is the same as the first threshold value in other embodiments. Several examples for quantifying message load (for comparison to threshold values) were described above. These examples include metrics such as number of data message flows currently being processed, number of data messages processed within a particular time period, number of payload bytes in the processed messages, etc. For these examples, the threshold values can similarly be quantified in terms of these metrics.
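The following sketch illustrates one variant of the 1810 decision using hypothetical per-node flow counts and threshold values; it grows the group when any node exceeds the first threshold and shrinks it when every node is below the second, which is only one of the embodiments described above.

```python
# Hypothetical thresholds and per-node load metrics (flows currently being
# processed) for the 1810 decision.
ADD_THRESHOLD = 10_000       # flows per service node
REMOVE_THRESHOLD = 2_000     # flows per service node

def group_adjustment(per_node_flows):
    if any(flows > ADD_THRESHOLD for flows in per_node_flows.values()):
        return "add"
    if all(flows < REMOVE_THRESHOLD for flows in per_node_flows.values()) \
            and len(per_node_flows) > 1:     # always keep the PSN
        return "remove"
    return "keep"

print(group_adjustment({"PSN": 12_500, "SSN1": 7_300}))   # -> "add"
print(group_adjustment({"PSN": 900,    "SSN1": 400}))     # -> "remove"
```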
When the process determines (at 1810) that it does not need to adjust the group membership, the process ends. Otherwise, the process transitions to 1815, where it performs the set of operations for adding one or more service nodes to, or removing one or more service nodes from, the SN group. In some embodiments, the sequence of operations for adding a service node is the same as the sequence of operations for removing a service node.
In other embodiments, these two sequences differ. For instance, in some embodiments, to add a service node, the process 1800 initially directs the controller set to add the service node, and then after receiving notification from the controller set regarding the addition of the service node, the process updates the load balancing rules of the PSN and, when applicable, the SSNs and FLBs. On the other hand, the process 1800 of some embodiments removes a service node by (1) initially directing the PSN (and when applicable, the SSNs and FLBs) to stop sending new flows to the service node, and then (2) after a transient delay or a sufficient reduction in the usage of the service node, directing the controller set to remove the service node from the SN group.
In still other embodiments, the process 1800 can follow other sequences of operations to add a service node to, or remove a service node from, the SN group. Also, in other embodiments, the PSN's LB agent does not perform the elastic adjustment process 1800. For instance, in some embodiments, the PSN's load balancer performs this process. In other embodiments, the controller set performs this process.
The process 1900 in some embodiments receives the group membership updates from another process of the controller set. For instance, in some embodiments, a virtualization manager informs the process 1900 that a new SVM has been added to an SN group when a new SVM has been created for the SN group, or has been removed from the SN group when the SVM has been terminated or has failed in the SN group. In some embodiments, the virtualization manager instantiates a new SVM or allocates a previously instantiated SVM to the SN group at the behest of the process 1900, as further described below.
At 1910, the process updates (1) the global statistics that the controller set maintains for the SN group based on the statistics received at 1905, and/or (2) the SN group's membership that the controller set maintains based on the group updates received at 1905. Next, at 1915, the process determines based on the updated statistics whether it should have one or more SVMs specified for, or removed from, the group. For instance, when the updated statistics cause the aggregated statistics for the SN group to exceed a threshold load value for one or more SNs in the group, the process 1900 determines that one or more new SVMs have to be specified (e.g., allotted or instantiated) for the SN group to reduce the load on the SVMs previously specified for the group. Conversely, when the updated statistics show that an SVM in a SN group is being underutilized or is no longer being used to handle any flows, the process 1900 determines (at 1915) that the SVM has to be removed from the SN group. In some embodiments, the process 1900 also determines that the SN group membership should be modified when it receives such a request from the PSN (e.g., through the PSN's LB agent or load balancer).
When the process 1900 determines (at 1915) that it should have one or more SVMs added to or removed from the group, the process requests (at 1920) one or more virtualization managers to add or remove the SVM(s), and then transitions to 1925. In some embodiments, a virtualization manager is a process that one or more controllers in the controller set execute, while in other embodiments, the virtualization manager is a process that is executed by one or more servers that are outside of the controller set that handles the LB data collection and data distribution.
The process 1900 also transitions to 1925 when it determines (at 1915) that no SVM needs to be added to or removed from the SN group. At 1925, the process determines whether the time has come for it to distribute membership updates and/or global statistics to one or more LB agents executing on one or more hosts. In some embodiments, the process 1900 distributes membership updates and/or global statistics on a periodic basis. In other embodiments, however, the process 1900 distributes membership updates and/or global statistics for the SN group whenever this data is modified. Also, in some embodiments, the process 1900 distributes updated statistics and/or group membership to only the LB agent of the SN group's PSN, while in other embodiments, the process distributes the updated statistics and/or group membership to the LB agent of each host that executes the SVM of the PSN and/or an SSN of the group. In the embodiments where the process distributes statistic and membership updates to only the LB agent of the group's PSN, one or more modules on the PSN's host distribute the updated LB rules and/or group membership to the SSNs if the SSNs need such data.
When the process determines (at 1925) that it does not need to distribute new data, it transitions to 1930 to determine whether it has received any more statistic and/or membership updates for which it needs to update its records. If so, the process transitions back to 1910 to process the newly received statistic and/or membership updates. If not, the process transitions back to 1925 to determine again whether it should distribute new data to one or more LB agents.
When the process determines (at 1925) that it should distribute membership update(s) and/or global statistics, it distributes (at 1935) this data to one or more LB agents that need to process this data to specify and/or update the load balancing rules that they maintain for their load balancers on their hosts. After 1935, the process determines (at 1940) whether it has received any more statistic and/or membership updates for which it needs to update its records. If not, the process remains at 1940 until it receives statistics and/or membership updates, at which time it transitions back to 1910 to process the newly received statistic and/or membership updates.
In the embodiments described above by reference to
The elastic SN groups of some embodiments are used to elastically provide services (e.g., load balancing, firewall, etc.) at the edge of a network.
In this example, the first load balancing layer 2020 is implemented by using an elastic load balancing group of some embodiments of the invention. This elastic group is identical to the group that was described above by reference to
Also, in this example, the second and third layers 2025 and 2030 of load balancers are implemented by the inline load balancers of the web server and application server VMs of some embodiments. In some embodiments, these inline load balancers are implemented like the load balancers 1415 of
Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer readable storage medium (also referred to as computer readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer readable media does not include carrier waves and electronic signals passing wirelessly or over wired connections.
In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.
The bus 2105 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 2100. For instance, the bus 2105 communicatively connects the processing unit(s) 2110 with the read-only memory 2130, the system memory 2125, and the permanent storage device 2135.
From these various memory units, the processing unit(s) 2110 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments. The read-only-memory (ROM) 2130 stores static data and instructions that are needed by the processing unit(s) 2110 and other modules of the computer system. The permanent storage device 2135, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 2100 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 2135.
Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device. Like the permanent storage device 2135, the system memory 2125 is a read-and-write memory device. However, unlike the storage device 2135, the system memory is a volatile read-and-write memory, such as a random access memory. The system memory stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 2125, the permanent storage device 2135, and/or the read-only memory 2130. From these various memory units, the processing unit(s) 2110 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.
The bus 2105 also connects to the input and output devices 2140 and 2145. The input devices enable the user to communicate information and select commands to the computer system. The input devices 2140 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 2145 display images generated by the computer system. The output devices include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as a touchscreen that function as both input and output devices.
Finally, as shown in
Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.
While the above discussion primarily refers to microprocessor or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application specific integrated circuits (ASICs) or field programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.
As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms display or displaying means displaying on an electronic device. As used in this specification, the terms “computer readable medium,” “computer readable media,” and “machine readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.
While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. For instance, this specification refers throughout to computational and network environments that include virtual machines (VMs). However, virtual machines are merely one example of data compute nodes (DCNs) or data compute end nodes, also referred to as addressable nodes. DCNs may include non-virtualized physical hosts, virtual machines, containers that run on top of a host operating system without the need for a hypervisor or separate operating system, and hypervisor kernel network interface modules.
VMs, in some embodiments, operate with their own guest operating systems on a host using resources of the host virtualized by virtualization software (e.g., a hypervisor, virtual machine monitor, etc.). The tenant (i.e., the owner of the VM) can choose which applications to operate on top of the guest operating system. Some containers, on the other hand, are constructs that run on top of a host operating system without the need for a hypervisor or separate guest operating system. In some embodiments, the host operating system uses name spaces to isolate the containers from each other and therefore provides operating-system level segregation of the different groups of applications that operate within different containers. This segregation is akin to the VM segregation that is offered in hypervisor-virtualized environments that virtualize system hardware, and thus can be viewed as a form of virtualization that isolates different groups of applications that operate in different containers. Such containers are more lightweight than VMs.
Hypervisor kernel network interface module, in some embodiments, is a non-VM DCN that includes a network stack with a hypervisor kernel network interface and receive/transmit threads. One example of a hypervisor kernel network interface module is the vmknic module that is part of the ESXi™ hypervisor of VMware, Inc.
One of ordinary skill in the art will recognize that while the specification refers to VMs, the examples given could be any type of DCNs, including physical hosts, VMs, non-VM containers, and hypervisor kernel network interface modules. In fact, the example networks could include combinations of different types of DCNs in some embodiments.
A number of the figures (e.g.,