Network-aware load balancing

Information

  • Patent Grant
  • Patent Number
    11,792,127
  • Date Filed
    Tuesday, November 2, 2021
  • Date Issued
    Tuesday, October 17, 2023
Abstract
Some embodiments of the invention provide a method for network-aware load balancing for data messages traversing a software-defined wide area network (SD-WAN) (e.g., a virtual network) including multiple connection links between different elements of the SD-WAN. The method includes receiving, at a load balancer in a multi-machine site, link state data relating to a set of SD-WAN datapaths including connection links of the multiple connection links. The load balancer, in some embodiments, provides load balancing for data messages sent from a machine in the multi-machine site to a set of destination machines (e.g., web servers, database servers, etc.) connected to the load balancer over the set of SD-WAN datapaths. The load balancer selects, for the data message, a particular destination machine (e.g., a frontend machine for a set of backend servers) in the set of destination machines by performing a load balancing operation based on the received link state data.
Description

In recent years, several companies have brought to market solutions for deploying software-defined (SD) wide-area networks (WANs) for enterprises. Some such SD-WAN solutions use external third-party private or public cloud datacenters (clouds) to define different virtual WANs for different enterprises. These solutions typically have edge forwarding elements (called edge devices) at SD-WAN sites of an enterprise that connect with one or more gateway forwarding elements (called gateway devices or gateways) that are deployed in the third-party clouds.


In such a deployment, an edge device connects through one or more secure connections with a gateway, with these connections traversing one or more network links that connect the edge device with an external network. Examples of such network links include MPLS links, 5G LTE links, commercial broadband Internet links (e.g., cable modem links or fiber optic links), etc. The SD-WAN sites include branch offices (called branches) of the enterprise, and these offices are often spread across several different geographic locations with network links to the gateways of various different network connectivity types. Accordingly, load balancing in these deployments is often based on geo-proximity or on measures of load on a set of load-balanced destination machines. However, network links often exhibit varying network path characteristics with respect to packet loss, latency, jitter, etc., that can affect quality of service or quality of experience. Such multi-site load balancing in SD-WAN implementations needs to be reliable and resilient.


BRIEF SUMMARY

Some embodiments of the invention provide a method for network-aware load balancing for data messages traversing a software-defined wide-area network (SD-WAN) (e.g., a virtual network) including multiple connection links (e.g., tunnels) between different elements of the SD-WAN (e.g., edge node forwarding elements, hubs, gateways, etc.). The method receives, at a load balancer in a multi-machine site of the SD-WAN, link state data relating to a set of SD-WAN datapaths including connection links of the multiple connection links. The load balancer, in some embodiments, uses the received link state to provide load balancing for data messages sent from a source machine in the multi-machine site to a set of destination machines (e.g., web servers, database servers, etc.) connected to the load balancer through the set of SD-WAN datapaths.


The load balancer receives a data message sent by the source machine in the multi-machine site to a destination machine in the set of destination machines. The load balancer selects, for the data message, a particular destination machine (e.g., a frontend machine for a set of backend servers) in the set of destination machines by performing a load balancing operation based on the received link state data. The data message is then forwarded to the selected particular destination machine in the set of destination machines. In addition to selecting the particular destination machine, in some embodiments, a particular datapath is selected to reach the particular destination machine based on the link state data.


In some embodiments, a controller cluster of the SD-WAN receives data regarding link characteristics from a set of elements (e.g., forwarding elements such as edge nodes, hubs, gateways, etc.) of the SD-WAN connected by the plurality of connection links. The SD-WAN controller cluster generates link state data relating to the plurality of connection links based on the received data regarding connection link characteristics. The generated link state data is then provided to the load balancer of the SD-WAN multi-machine site for the load balancer to use in making load balancing decisions.


In some embodiments, the controller cluster provides the link state data to SD-WAN elements, which in turn provide the link state data to their associated load balancers. These SD-WAN elements in some embodiments include SD-WAN devices that are collocated with the load balancers at the SD-WAN multi-machine sites. In other embodiments, the controller cluster provides the link state data directly to the load balancers at multi-machine sites, such as branch sites, datacenter sites, etc.


In some embodiments, the link state data is a set of criteria used to make load balancing decisions (e.g., a set of criteria specified by a load balancing policy). In other embodiments, the load balancer uses the link state data (e.g., statistics regarding aggregated load on each link) to derive a set of criteria used to make load balancing decisions. The set of criteria, in some embodiments, is a set of weights used in the load balancing process. In other embodiments, the link state data includes attributes of a connection link such as packet loss, latency, signal jitter, a quality of experience (QoE) score, etc., that are included in the set of criteria used to make the load balancing decision or are used to derive the set of criteria (e.g., used to derive a weight used as a criterion).
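The derivation of a per-link weight from such attributes can be sketched as follows. This is an illustrative example only, not a formula from the patent; the attribute names, the penalty coefficients, and the assumption that the QoE score is normalized to [0, 1] are all hypothetical.

```python
# Hypothetical sketch: derive a load balancing weight from link state
# attributes (packet loss, latency, jitter, QoE score). The scoring
# formula and coefficients are illustrative tunables, not taken from
# the patent text.

def link_weight(loss_pct: float, latency_ms: float,
                jitter_ms: float, qoe_score: float) -> float:
    """Return a weight in (0, 1]; healthier links get larger weights."""
    # Penalize each impairment; coefficients are arbitrary tunables.
    penalty = 1.0 + 10.0 * loss_pct + 0.01 * latency_ms + 0.05 * jitter_ms
    # Scale by the QoE score (assumed normalized to [0, 1]).
    return max(qoe_score, 0.01) / penalty

# A lossy, high-latency link scores lower than a clean one.
good = link_weight(loss_pct=0.001, latency_ms=20, jitter_ms=2, qoe_score=0.95)
bad = link_weight(loss_pct=0.05, latency_ms=120, jitter_ms=15, qoe_score=0.60)
```

A healthier link (low loss, latency, and jitter, high QoE) yields a larger weight, which a weight-based load balancing operation can then favor.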


In some embodiments, the load balancer also uses other load balancing criteria received from the destination machines or tracked at the load balancer, such as a CPU load, a memory load, a session load, etc. of the destination machine (or a set of backend servers for which the destination machine is a frontend). The link state data and the other load balancing criteria, in some embodiments, are used to generate a single weight for each destination machine. In other embodiments, the other load balancing criteria are used to calculate a first set of weights for each destination machine while the link state data is used to calculate a second set of weights for a set of datapaths to the set of destination machines.
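The single-weight embodiment described above might be sketched like this; the equal-weight blend, the simple averaging of the load attributes, and the assumption that each load measure is normalized to [0, 1] are all illustrative choices, not details from the patent.

```python
# Hypothetical sketch of the single-weight embodiment: fold CPU load,
# memory load, and session load together with a network weight (derived
# from link state) into one weight per destination machine. The 1/3
# averaging and the alpha blend are arbitrary illustrative choices.
def combined_weight(cpu_load: float, mem_load: float, session_load: float,
                    network_weight: float, alpha: float = 0.5) -> float:
    load = (cpu_load + mem_load + session_load) / 3.0   # each assumed in [0, 1]
    load_weight = 1.0 - load                            # lighter load => larger weight
    return alpha * load_weight + (1.0 - alpha) * network_weight

w = combined_weight(cpu_load=0.2, mem_load=0.4, session_load=0.6,
                    network_weight=0.8)
# 0.5 * 0.6 + 0.5 * 0.8 == 0.7
```

The two-weight embodiment would instead keep `load_weight` and `network_weight` separate, applying the first to destination machines and the second to the datapaths that reach them.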


In some embodiments, the link state data is generated for each connection link between elements of the SD-WAN, while in other embodiments the link state data is generated for each of a set of datapaths that are defined by a specific set of connection links used to traverse the SD-WAN elements connecting the load balancer and a particular destination machine (e.g., an SD-WAN edge node, frontend for a set of backend nodes, etc.) at a multi-machine site (e.g., private cloud datacenter, public cloud datacenter, software as a service (SaaS) public cloud, enterprise datacenter, branch office, etc.). In yet other embodiments, the link state data is generated for collections of datapaths connecting the load balancer and a particular destination machine in the set of destination machines. When the generated link state data relates to individual connection links, the load balancer, in some embodiments, derives the load balancing criteria for each datapath based on the link state data related to the individual connection links.
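When link state is reported per connection link, per-datapath criteria can be derived by composing the links along the path. A plausible composition (assumed here for illustration; the patent does not specify one) is that latencies add along the path while per-link delivery probabilities multiply, so losses compound:

```python
# Illustrative sketch: derive datapath-level state from per-link state
# for a datapath defined by an ordered set of connection links. The
# field names ("latency_ms", "loss") are hypothetical.
def datapath_state(links: list[dict]) -> dict:
    latency = sum(l["latency_ms"] for l in links)   # latencies add
    delivery = 1.0
    for l in links:
        delivery *= (1.0 - l["loss"])               # delivery probabilities multiply
    return {"latency_ms": latency, "loss": 1.0 - delivery}

path = datapath_state([
    {"latency_ms": 10, "loss": 0.01},   # e.g., edge node -> gateway
    {"latency_ms": 25, "loss": 0.02},   # e.g., gateway -> destination edge node
])
# path["latency_ms"] == 35; path["loss"] == 1 - 0.99 * 0.98
```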


The preceding Summary is intended to serve as a brief introduction to some embodiments of the invention. It is not meant to be an introduction or overview of all inventive subject matter disclosed in this document. The Detailed Description that follows and the Drawings that are referred to in the Detailed Description will further describe the embodiments described in the Summary as well as other embodiments. Accordingly, to understand all the embodiments described by this document, a full review of the Summary, the Detailed Description, the Drawings, and the Claims is needed. Moreover, the claimed subject matters are not to be limited by the illustrative details in the Summary, the Detailed Description, and the Drawings.





BRIEF DESCRIPTION OF THE DRAWINGS

The novel features of the invention are set forth in the appended claims. However, for purposes of explanation, several embodiments of the invention are set forth in the following figures.



FIG. 1 illustrates an example of a virtual network that is created for a particular entity using a hub that is deployed in a public cloud datacenter of a public cloud provider.



FIG. 2 illustrates a first multi-machine site hosting a set of machines that connect to a set of destination machines in a set of multi-machine SD-WAN sites.



FIG. 3 illustrates a network in which a load balancing device receives load attribute data from sets of servers (e.g., destination machines) and a set of SD-WAN attributes (e.g., link state data) from an SD-WAN edge forwarding element based on a set of SD-WAN attributes sent from a set of SD-WAN controllers.



FIG. 4 conceptually illustrates a process for generating link state data and providing the link state data to a load balancer in an SD-WAN.



FIG. 5 conceptually illustrates a process for calculating a set of load balancing criteria based on a set of received link state data and destination machine load attributes.



FIG. 6 conceptually illustrates a process used in some embodiments to provide load balancing for a set of destination machines.



FIG. 7 illustrates a network in which a load balancing device uses a single weight associated with each of a set of destination machines (or datapaths) located at multiple SD-WAN sites to select a destination machine for each received data message.



FIG. 8 illustrates a network in which a load balancing device uses a load weight and a network weight associated with each of a set of destination machines located at multiple SD-WAN sites to select a destination machine for each received data message.



FIG. 9 illustrates a network in which a load balancing device uses a load weight and a network weight associated with each of a set of datapaths to a set of SD-WAN sites to select a particular datapath to a particular SD-WAN site for each received data message.



FIG. 10 illustrates a full mesh network among a set of SD-WAN edge nodes and a set of SD-WAN hubs connected by connection links of different qualities.



FIG. 11 illustrates an embodiment of a GSLB system that can use network-aware load balancing.



FIG. 12 illustrates an embodiment including a network-aware GSLB system deployed in an SD-WAN using network-aware load balancing.



FIG. 13 conceptually illustrates a computer system with which some embodiments of the invention are implemented.





DETAILED DESCRIPTION

In the following detailed description of the invention, numerous details, examples, and embodiments of the invention are set forth and described. However, it will be clear and apparent to one skilled in the art that the invention is not limited to the embodiments set forth and that the invention may be practiced without some of the specific details and examples discussed.


Some embodiments of the invention provide a method for network-aware load balancing for data messages traversing a software-defined wide-area network (SD-WAN) (e.g., a virtual network) including multiple connection links (e.g., tunnels, virtual private networks (VPNs), etc.) between different elements of the SD-WAN (e.g., edge node forwarding elements, hubs, gateways, etc.). The method receives, at a load balancer in a multi-machine site (e.g., a branch office, datacenter, etc.) of the SD-WAN, link state data relating to a set of SD-WAN datapaths, including link state data for the multiple connection links. The load balancer, in some embodiments, uses the provided link state to provide load balancing for data messages sent from a source machine in the multi-machine site to a set of destination machines (e.g., web servers, database servers, containers, pods, virtual machines, compute nodes, etc.) connected to the load balancer through the set of SD-WAN datapaths.


As used in this document, the term data message refers to a collection of bits in a particular format sent across a network. One of ordinary skill in the art will recognize that the term data message may be used herein to refer to various formatted collections of bits that may be sent across a network, such as Ethernet frames, IP packets, TCP segments, UDP datagrams, etc. Also, as used in this document, references to L2, L3, L4, and L7 layers (or layer 2, layer 3, layer 4, layer 7) are references, respectively, to the second data link layer, the third network layer, the fourth transport layer, and the seventh application layer of the OSI (Open Systems Interconnection) layer model.



FIG. 1 illustrates an example of a virtual network 100 that is created for a particular entity using SD-WAN forwarding elements deployed at branch sites, datacenters, and public clouds. Examples of public clouds are public clouds provided by Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, etc., while examples of entities include a company (e.g., corporation, partnership, etc.), an organization (e.g., a school, a non-profit, a government entity, etc.), etc.


In FIG. 1, the SD-WAN forwarding elements include cloud gateway 105 and SD-WAN forwarding elements 130, 132, 134, 136. The cloud gateway (CGW) in some embodiments is a forwarding element that is in a private or public datacenter 110. The CGW 105 in some embodiments has secure connection links (e.g., tunnels) with edge forwarding elements (e.g., SD-WAN edge forwarding elements (FEs) 130, 132, 134, and 136) at the particular entity's multi-machine sites (e.g., SD-WAN edge sites 120, 122, and 124 with multiple machines 150), such as branch offices, datacenters, etc. These multi-machine sites are often at different physical locations (e.g., different buildings, different cities, different states, etc.) and are referred to below as multi-machine sites or nodes.


Four multi-machine sites 120-126 are illustrated in FIG. 1, with three of them being branch sites 120-124, and one being a datacenter 126. Each branch site is shown to include an edge forwarding node 130-134, while the datacenter site 126 is shown to include a hub forwarding node 136. The datacenter SD-WAN forwarding node 136 is referred to as a hub node because in some embodiments this forwarding node can be used to connect to other edge forwarding nodes of the branch sites 120-124. The hub node in some embodiments provides services (e.g., middlebox services) for packets that it forwards from one site to another branch site. The hub node also provides access to the datacenter resources 156, as further described below.


Each edge forwarding element (e.g., SD-WAN edge FEs 130-134) exchanges data messages with one or more cloud gateways 105 through one or more connection links 115 (e.g., multiple connection links available at the edge forwarding element). In some embodiments, these connection links include secure and unsecure connection links, while in other embodiments they only include secure connection links. As shown by edge node 134 and gateway 105, multiple secure connection links (e.g., multiple secure tunnels that are established over multiple physical links) can be established between one edge node and a gateway.


When multiple such links are defined between an edge node and a gateway, each secure connection link in some embodiments is associated with a different physical network link between the edge node and an external network. For instance, to access external networks, an edge node in some embodiments has one or more commercial broadband Internet links (e.g., a cable modem link, a fiber optic link) to access the Internet, an MPLS (multiprotocol label switching) link to access external networks through an MPLS provider's network, and/or a wireless cellular link (e.g., a 5G LTE link). In some embodiments, the different physical links between the edge node 134 and the cloud gateway 105 are the same type of links (e.g., are different MPLS links).


In some embodiments, one edge forwarding node 130-134 can also have multiple direct links 115 (e.g., secure connection links established through multiple physical links) to another edge forwarding node 130-134, and/or to a datacenter hub node 136. Again, the different links in some embodiments can use different types of physical links or the same type of physical links. Also, in some embodiments, a first edge forwarding node of a first branch site can connect to a second edge forwarding node of a second branch site (1) directly through one or more links 115, or (2) through a cloud gateway or datacenter hub to which the first edge forwarding node connects through two or more links 115. Hence, in some embodiments, a first edge forwarding node (e.g., 134) of a first branch site (e.g., 124) can use multiple SD-WAN links 115 to reach a second edge forwarding node (e.g., 130) of a second branch site (e.g., 120), or a hub forwarding node 136 of a datacenter site 126.


The cloud gateway 105 in some embodiments is used to connect two SD-WAN forwarding nodes 130-136 through at least two secure connection links 115 between the gateway 105 and the two forwarding elements at the two SD-WAN sites (e.g., branch sites 120-124 or datacenter site 126). In some embodiments, the cloud gateway 105 also provides network data from one multi-machine site to another multi-machine site (e.g., provides the accessible subnets of one site to another site). Like the cloud gateway 105, the hub forwarding element 136 of the datacenter 126 in some embodiments can be used to connect two SD-WAN forwarding nodes 130-134 of two branch sites through at least two secure connection links 115 between the hub 136 and the two forwarding elements at the two branch sites 120-124.


In some embodiments, each secure connection link between two SD-WAN forwarding nodes (i.e., CGW 105 and edge forwarding nodes 130-136) is formed as a VPN (virtual private network) tunnel between the two forwarding nodes. In this example, the collection of the SD-WAN forwarding nodes (e.g., forwarding elements 130-136 and gateways 105) and the secure connections 115 between the forwarding nodes forms the virtual network 100 for the particular entity that spans at least public or private cloud datacenter 110 to connect the branch and datacenter sites 120-126.


In some embodiments, secure connection links are defined between gateways in different public cloud datacenters to allow paths through the virtual network to traverse from one public cloud datacenter to another, while no such links are defined in other embodiments. Also, in some embodiments, the gateway 105 is a multi-tenant gateway that is used to define other virtual networks for other entities (e.g., other companies, organizations, etc.). Some such embodiments use tenant identifiers to create tunnels between a gateway and edge forwarding element of a particular entity, and then use tunnel identifiers of the created tunnels to allow the gateway to differentiate data message flows that it receives from edge forwarding elements of one entity from data message flows that it receives along other tunnels of other entities. In other embodiments, gateways are single-tenant and are specifically deployed to be used by just one entity.



FIG. 1 illustrates a cluster of controllers 140 that serves as a central point for managing (e.g., defining and modifying) configuration data that is provided to the edge nodes and/or gateways to configure some or all of the operations. In some embodiments, this controller cluster 140 is in one or more public cloud datacenters, while in other embodiments it is in one or more private datacenters. In some embodiments, the controller cluster 140 has a set of manager servers that define and modify the configuration data, and a set of controller servers that distribute the configuration data to the edge forwarding elements (FEs), hubs and/or gateways. In some embodiments, the controller cluster 140 directs edge forwarding elements and hubs to use certain gateways (i.e., assigns a gateway to the edge forwarding elements and hubs). The controller cluster 140 also provides next hop forwarding rules and load balancing criteria in some embodiments.



FIG. 2 illustrates a branch multi-machine site 205 hosting a set of machines 206 that connects to a set of destination machines (e.g., servers 241-243) in a set of other multi-machine sites 261-263, which in this example are all datacenters. The connections are made through a load balancer 201, an SD-WAN edge FE 230, and a set of connection links 221-224 to SD-WAN cloud gateways 231-232 and SD-WAN edge FE 233 (collectively, “SD-WAN edge devices”). In some embodiments, SD-WAN cloud gateways 231 and 232 are multi-tenant SD-WAN edge devices deployed at a public cloud datacenter to provide SD-WAN services to software as a service (SaaS), infrastructure as a service (IaaS), and cloud network services as well as access to private backbones.


In some embodiments, the CGW 232 is deployed in the same public datacenter 262 as the servers 242, while in other embodiments it is deployed in another public datacenter. Similarly, in some embodiments, the CGW 231 is deployed in the same public datacenter 261 as the servers 241, while in other embodiments it is deployed in another public datacenter. As illustrated, connection links 221-223 utilize public Internet 270, while connection link 224 utilizes a private network 280 (e.g., an MPLS provider's network). The connection links 221-224, in some embodiments, are secure tunnels (e.g., IPSec tunnels) used to implement a virtual private network.



FIG. 2 also illustrates a set of one or more SD-WAN controllers 250 executing at the private datacenter 263. Like controller cluster 140 of FIG. 1, the set of SD-WAN controllers 250 manage a particular SD-WAN implemented by connection links 221-224. In some embodiments, the set of SD-WAN controllers 250 receive data regarding link characteristics of connection links (e.g., connection links 221-224) used to implement the SD-WAN from elements (e.g., SD-WAN edge devices 230-233) of the SD-WAN connected by the connection links. The set of SD-WAN controllers 250 generate link state data relating to the connection links based on the received data regarding connection link characteristics. The generated link state data is then provided to the load balancer 201 of the SD-WAN multi-machine site 205 for the load balancer to use in making load balancing decisions. The specific operations at the set of controllers 250 and the load balancer 201 will be explained below in more detail in relation to FIGS. 4-6.



FIG. 3 illustrates a network 300 in which a load balancing device 301 receives (1) load attribute data 370 (e.g., including load attributes 371-373) relating to the load on the sets of servers 341-343 (which are the destination machines in this example) and (2) a set of SD-WAN attributes 312 (e.g., link state data) from SD-WAN edge FE 330 based on a set of SD-WAN attributes 311 sent from a set of SD-WAN controllers 350. In some embodiments, the SD-WAN attributes 311 and 312 are identical, while in other embodiments, the SD-WAN edge FE 330 modifies SD-WAN attributes 311 to generate link state data for consumption by the local load balancer 301.


Load attributes 371-373, in some embodiments, are sent to SD-WAN controller 350 for this controller to aggregate and send to the load balancing device 301. In some embodiments, the SD-WAN controller 350 generates weights and/or other load balancing criteria from the load attributes that it receives. In these embodiments, the controller 350 provides the generated weights and/or other load balancing criteria to the load balancer 301 to use in performing its load balancing operations to distribute the data message load among the SD-WAN datacenter sites 361-363. In other embodiments, the load balancing device 301 generates the weights and/or other load balancing criteria from the load attributes 370 that it receives from non-controller modules and/or devices at datacenter sites 361-363, or receives from the controller 350.


Network 300 includes four edge forwarding elements 330-333 that connect four sites 360-363 through an SD-WAN established by these forwarding elements and the secure connections 321-323 between them. In the illustrated embodiment, the SD-WAN edge devices 331 and 332 serve as frontend load-balancing devices for the backend servers 341 and 342, respectively, and are identified as the destination machines (e.g., by virtual IP addresses associated with their respective sets of servers).


In some embodiments, an SD-WAN edge forwarding element (e.g., SD-WAN edge FE 333) provides a received data message destined for its associated local set of servers (e.g., server set 343) to a local load balancing service engine (e.g., service engine 344) that provides the load balancing service to distribute data messages among the set of servers 343. Each set of servers 341-343 is associated with a set of load balancing weights LW341-LW343, which represent the collective load on the servers of each server set. The load balancer 301 uses the load balancing weights to determine how to distribute the data message load from a set of machines 306 among the different server sets 341-343.


In addition, the load balancing device for each server set (e.g., the CGW 331 or service engine 344 for the server set 341 or 343) in some embodiments uses another set of load balancing weights (e.g., one that represents the load on the individual servers in the server set) to determine how to distribute the data message load among the servers in the set (e.g., by performing a weighted round-robin selection of the servers in the set for successive flows, in embodiments where different weights in the set are associated with different servers).
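A weight-based selection of this kind can be sketched as a naive weighted round robin in which a server with weight w is chosen for w successive flows per cycle. This is an illustrative simplification (production implementations typically interleave picks more smoothly), and the integer weights are an assumption.

```python
import itertools

# Minimal weighted round-robin sketch: yields server indices so that a
# server with weight w appears w times per cycle. Naive "burst" form;
# smooth WRR variants interleave the picks instead.
def weighted_round_robin(weights: list[int]):
    while True:
        for server, w in enumerate(weights):
            for _ in range(w):
                yield server

# One full cycle over weights [3, 1, 2]: server 0 three times,
# server 1 once, server 2 twice.
picks = list(itertools.islice(weighted_round_robin([3, 1, 2]), 6))
# picks == [0, 0, 0, 1, 2, 2]
```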


In different embodiments, the load attributes 371-373 are tracked differently. For instance, in some embodiments, the servers 341-343 track and provide the load attributes. In other embodiments, this data is tracked and provided by load tracking modules that execute on the same host computers as the servers, or that are associated with these computers. In still other embodiments, the load attributes are collected by the load balancing devices and/or modules (e.g., CGW 331 or service engine 344) that receive the data messages forwarded by the load balancer 301 and that distribute these data messages amongst the servers in their associated server set.



FIG. 4 conceptually illustrates a process 400 for generating link state data and providing the link state data to one or more load balancers in an SD-WAN. Process 400, in some embodiments, is performed by an SD-WAN controller or a set of SD-WAN controllers (e.g., SD-WAN controllers 250 or 350). The process 400 begins by receiving (at 410) connection link attribute data from a set of SD-WAN elements (e.g., SD-WAN edge FEs, gateways, hubs, etc.) at one or more multi-machine sites. In some embodiments, the connection link attributes are received in response to a request from the set of SD-WAN controllers or through a long-poll operation established with each SD-WAN element to be notified of changes to connection link attributes. The connection link attributes, in some embodiments, include at least one of a measure of latency, a measure of loss, a measure of jitter, and a measure of a quality of experience (QoE).


The process 400 then generates (at 420) link state data for each connection link identified in the received connection link attribute data. The link state data, in some embodiments, is aggregate link state data for a set of connection links connecting a pair of SD-WAN elements (e.g., SD-WAN edge FEs, hubs, and gateways). For example, in some embodiments, an SD-WAN edge FE connects to an SD-WAN gateway using multiple connection links (e.g., a public internet connection link, an MPLS connection link, a wireless cellular link, etc.) that the SD-WAN may use to support a particular communication between a source machine and a destination machine in the set of destination machines (e.g., by using multiple communication links in the aggregate set for a same communication session to reduce the effects of packet loss along either path). Accordingly, the aggregate link state data, in such an embodiment, reflects the characteristics of the set of connection links as it is used by the SD-WAN edge FE to connect to the SD-WAN gateway.
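One plausible aggregation for a set of parallel links carrying the same session (assumed here for illustration; the patent does not give a formula) is that a packet is lost only if every link drops it, while the effective latency is that of the best link:

```python
# Illustrative sketch: aggregate link state for multiple parallel links
# between a pair of SD-WAN elements (e.g., broadband + MPLS). If the
# SD-WAN replicates a session's packets across links, effective loss is
# the probability that every link drops the packet, and the effective
# latency is the best link's latency. Field names are hypothetical.
def aggregate_links(links: list[dict]) -> dict:
    loss = 1.0
    for l in links:
        loss *= l["loss"]
    return {"loss": loss,
            "latency_ms": min(l["latency_ms"] for l in links)}

agg = aggregate_links([
    {"loss": 0.02, "latency_ms": 30},  # e.g., broadband link
    {"loss": 0.01, "latency_ms": 12},  # e.g., MPLS link
])
# agg["loss"] == 0.0002; agg["latency_ms"] == 12
```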


In some embodiments, the link state data includes both current and historical data (e.g., that a particular connection link flaps every 20 minutes, that a particular connection link latency increases during a particular period of the day or week, etc.). In some embodiments, the historical data is incorporated into a QoE measure, while in other embodiments, the historical data is used to provide link state data (e.g., from the SD-WAN edge FE) that reflects patterns in connectivity data over time (e.g., increased latency or jitter during certain hours, etc.).


In some embodiments, the link state data is a set of criteria that includes criteria used by a load balancer to make load balancing decisions. The set of criteria, in some embodiments, includes a set of weights that are used by the load balancer in conjunction with a set of weights based on characteristics of the set of destination machines among which the load balancer balances. In some embodiments, the set of criteria provided as link state data are criteria specified in a load balancing policy. In other embodiments, the link state data is used by the load balancer to generate criteria (e.g., weights) used to perform the load balancing. The use of the link state data in performing the load balancing operation is discussed in more detail in relation to FIG. 5.


The generated link state data is then provided (at 430) to one or more load balancers (or sets of load balancers) at one or more SD-WAN sites. In some embodiments, the set of SD-WAN controllers provides (at 430) the generated link state data to an SD-WAN element (e.g., a collocated SD-WAN edge FE) that, in turn, provides the link state data to the load balancer. The generated link state data provided to a particular load balancer, in some embodiments, includes only link state data that is relevant to a set of connection links used to connect to a set of destination machines among which the load balancer distributes data messages (e.g., excluding “dead-end” connection links from a hub or gateway to an edge node not executing on a destination machine in the set of destination machines).
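Restricting the link state sent to a particular load balancer can be sketched as a simple filter over the links that appear on some datapath toward that load balancer's destination set; the link identifiers and data shapes below are hypothetical.

```python
# Illustrative sketch: keep only the link state relevant to a given
# load balancer, i.e., state for links appearing on some datapath to
# its destination machines, dropping "dead-end" links.
def relevant_link_state(all_links: dict, datapaths: list[list[str]]) -> dict:
    used = {link for path in datapaths for link in path}
    return {lid: state for lid, state in all_links.items() if lid in used}

all_links = {
    "edge1-gw1": {"loss": 0.01},
    "gw1-edge2": {"loss": 0.02},
    "gw1-edge3": {"loss": 0.05},   # dead-end: reaches no destination machine
}
state = relevant_link_state(all_links, [["edge1-gw1", "gw1-edge2"]])
# only the two links on the datapath remain
```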


Process 400 ends after providing (at 430) the generated link state data to one or more load balancers at one or more SD-WAN sites. The process 400 repeats (i.e., is performed periodically or iteratively) based on detected events (e.g., the addition of a load balancer, the addition of an SD-WAN element, a connection link failure, etc.), according to a schedule, or as attribute data is received from SD-WAN elements.



FIG. 5 conceptually illustrates a process 500 for calculating a set of load balancing criteria based on a set of received link state data and destination machine load attributes. Process 500, in some embodiments, is performed by a load balancer (e.g., load balancer 301) at an SD-WAN site. In other embodiments, this process is performed by a server or controller associated with this load balancer (e.g., load balancer 301). In some embodiments, this server or controller executes on the same device (e.g., same computer) as the load balancer (e.g., load balancer 301), or executes on a device in the same datacenter as the load balancer (e.g., load balancer 301).


Process 500 begins by receiving (at 510) load data regarding a current load on a set of candidate destination machines (e.g., a set of servers associated with a virtual IP (VIP) address) from which the load balancer selects a destination for a particular data message flow. The load data, in some embodiments, includes information relating to a CPU load, a memory load, a session load, etc., for each destination machine in the set of destination machines.


In some embodiments, a load balancer maintains information regarding data message flows distributed to different machines in the set of destination machines, and additional load data is received from other load balancers at the same SD-WAN site or at different SD-WAN sites that distribute data messages among the same set of destination machines. Examples of a distributed load balancer (implemented by a set of load balancing service engines) are provided in FIGS. 11 and 12. Conjunctively or alternatively, load data (or a capacity used to calculate load data) in some embodiments is received from the set of destination machines.


The process 500 also receives (at 520) link state data relating to connection links linking the load balancer to the set of destination machines. As described above, in some embodiments, the link state data is a set of criteria that are specified in a load balancing policy. For example, in some embodiments, a load balancing policy may specify calculating a single weight for each destination machine based on a set of load measurements and a set of connectivity measurements. In other embodiments, a load balancing policy may specify calculating a first load-based weight and a second connectivity-based weight. In either of these embodiments, the set of connectivity measurements is, or is based on, the received link state data. The weights, in some embodiments, are used to perform a weighted round robin or other similar weight-based load balancing operation. One of ordinary skill in the art will appreciate that receiving the load data and link state data, in some embodiments, occurs in a different order, or each occurs periodically, or each occurs based on different triggering events (e.g., after a certain number of load balancing decisions made by a related load balancer, upon a connection link failure, etc.).


After receiving the load and link state data, the process 500 calculates (at 530) a set of weights for each destination machine. In some embodiments, the set of weights for a particular destination machine includes a first load-based weight and a second connectivity-based weight. An embodiment using two weights is discussed below in relation to FIG. 6. In some embodiments, the load data and the link state data are used to generate a single weight associated with each destination machine. In other embodiments, the load balancer uses the link state data to identify multiple possible paths (e.g., datapaths) for reaching a particular destination machine, calculates a weight associated with each datapath based on the load data and the link state data for connection links that make up the path, and treats each path as a potential destination as in table 760B of FIG. 7 discussed below. A load balancer, in some embodiments, then performs a round robin operation based on the calculated weights (e.g., a weighted round robin).
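The weighted round robin mentioned above can be sketched as follows. This is an illustrative Python sketch only; the destination names and integer weight values are hypothetical, and a real implementation would derive the weights from the load and link state data as described:

```python
import itertools
import random

def weighted_round_robin(weights):
    # Build a schedule in which each destination appears in proportion to
    # its (positive integer) weight, then cycle through it indefinitely.
    schedule = [dest for dest, w in sorted(weights.items()) for _ in range(w)]
    random.shuffle(schedule)  # avoid sending long bursts to one destination
    return itertools.cycle(schedule)

# Hypothetical weights: "site-A" should receive 3x the flows of "site-B".
picker = weighted_round_robin({"site-A": 3, "site-B": 1})
first_four = [next(picker) for _ in range(4)]
```

Over any full cycle of four picks, "site-A" is chosen three times and "site-B" once, which is the proportional behavior a weight-based round robin is meant to provide.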



FIG. 6 conceptually illustrates a process 600 used in some embodiments to provide load balancing for a set of destination machines. Process 600 is performed, in some embodiments, by each load balancer in an SD-WAN site that selects particular destination machines from a set of destination machines at another SD-WAN site. In some embodiments, a load balancer operating at a particular edge site performs the load balancing operation before providing a data message to a collocated SD-WAN edge FE at the edge site.


As illustrated in FIG. 3, the set of destination machines can be distributed across several sites 361-363, and a load balancer associated with each of these sites can then select one destination machine at each of these sites after the process 600 selects one of these sites. Alternatively, the process 600 in some embodiments selects individual destination machines at some sites, while having a load balancer at another site select individual destination machines at that site. In still other embodiments, the process 600 selects individual destination machines at each other site, rather than having another load balancer associated with each other site select any amongst the destination machines at those sites.


The process 600 begins by receiving (at 610) a data message destined to a set of machines. In some embodiments, the data message is addressed to a VIP that is associated with the set of destination machines or is a request (e.g., a request for content) associated with the set of destination machines. The set of destination machines includes a subset of logically grouped machines (e.g., servers, virtual machines, Pods, etc.) that appear to the load balancer as a single destination machine at a particular location (e.g., SD-WAN site, datacenter, etc.).


The process 600 then identifies (at 620) a set of candidate destination machines or datapaths based on the load data relating to the set of destination machines. In some embodiments, the identified set of candidate destination machines (or datapaths) is based on a weight that relates to a load on the destination machines. For example, in an embodiment that uses a least connection method of load balancing, the set of candidate destination machines is identified as the set of “n” destination machines with the fewest number of active connections. One of ordinary skill in the art will appreciate that the least connection method is one example of a load balancing operation based on selecting a least-loaded destination machine and that other measures of load can be used as described in relation to the least connection method.
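A least-connection style identification of the "n" candidate machines can be sketched as below; the machine names and connection counts are hypothetical, and any other load measure could be substituted for the connection count:

```python
def least_loaded_candidates(active_connections, n):
    # Rank destinations by active connection count (or any other load
    # measure) and keep the n least-loaded machines as candidates.
    ranked = sorted(active_connections, key=active_connections.get)
    return ranked[:n]

conns = {"m1": 12, "m2": 4, "m3": 9, "m4": 30}
candidates = least_loaded_candidates(conns, n=2)  # ["m2", "m3"]
```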


In some embodiments, the value of “n” is an integer that is less than the number of destination machines in the set of destination machines. The value of “n” is selected, in some embodiments, to approximate a user-defined or default fraction (e.g., 10%, 25%, 50%, etc.) of the destination machines. Instead of using a fixed number of candidate destination machines, some embodiments identify a set of candidate machines based on a load-based weight being under or over a threshold that can be dynamically adjusted based on the current load-based weights. For example, if the least-loaded destination is measured to have a weight “WLL” (e.g., representing using 20% of its capacity) the candidate destination machines may be identified based on being within a certain fixed percentage (P) of the weight (e.g., WLL<WCDM<WLL+P) or being no more than some fixed factor (A) times the weight of the least-loaded destination machine (e.g., WLL<WCDM<A*WLL), where A is greater than 1. Similarly, if a load-based weight measures excess capacity, a minimum threshold can be calculated by subtraction by P or division by A in the place of the addition and multiplication used to calculate upper thresholds.
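The dynamic thresholds described above (a fixed margin P over, or a fixed factor A times, the least-loaded weight WLL) can be sketched as follows. The function and parameter names are illustrative assumptions, and this sketch treats a lower weight as less loaded:

```python
def candidates_by_threshold(load_weights, p=None, a=None):
    # Exactly one of the additive margin `p` or the multiplicative
    # factor `a` (> 1) is expected; both names are hypothetical.
    w_ll = min(load_weights.values())  # weight of the least-loaded machine
    limit = w_ll + p if p is not None else a * w_ll
    return {dest for dest, w in load_weights.items() if w <= limit}

# Hypothetical load weights (fraction of capacity in use).
loads = {"m1": 0.20, "m2": 0.25, "m3": 0.60}
```

With these values, both `p=0.10` and `a=2` admit "m1" and "m2" as candidates while excluding the heavily loaded "m3".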


In some embodiments, identifying the set of candidate destination machines includes identifying a set of candidate datapaths associated with the set of candidate destination machines. In some such embodiments, a set of datapaths to reach the candidate destination machine is identified for each candidate destination machine. Some embodiments identify only a single candidate destination machine (e.g., identify the least-loaded destination machine) and the set of candidate datapaths includes only the datapaths to the single candidate destination machine.


After identifying (at 620) the set of candidate destination machines or datapaths based on the load data, a destination machine or datapath for the data message is selected (at 630) based on the link state data. In some embodiments, the link state data is a connectivity-based weight calculated by an SD-WAN and provided to the load balancer. In other embodiments, the link state data includes data regarding link characteristics that the load balancer uses to calculate the connectivity-based weight. Selecting the destination machine for a data message, in some embodiments, includes selecting the destination machine associated with a highest (or lowest) connectivity-based weight in the set of candidate destination machines. The connectivity-based weight, in some embodiments, is based on at least one of a measure of latency, a measure of loss, or a measure of jitter. In some embodiments, the connectivity-based weight is based on a QoE measurement based on some combination of connection link attribute data (e.g., if provided by the set of controllers) or link state data for one or more connection links (e.g., a set of connection links between a source edge node and a destination machine, a set of connection links making up a datapath, etc.).


The data message is then forwarded (at 640) to the selected destination machine and, in some embodiments, along the selected datapath. In some embodiments that select a particular datapath, a collocated SD-WAN edge FE provides the load balancer with information used to distinguish between different datapaths. In some embodiments in which the destination machine is selected but the datapath is not, the SD-WAN edge FE performs a connectivity optimization process to use one or more of the connection links that can be used to communicate with the destination machine.



FIGS. 7-12 illustrate embodiments implementing network-aware load balancing as described above. FIG. 7 illustrates a network 700 in which a load balancer 701 uses a single weight associated with each of a set of destination machines (e.g., server clusters 741-743 or datapaths) located at multiple SD-WAN sites 751-753 to select an SD-WAN site for each received data message. Network 700 includes four SD-WAN sites 750-753 associated with SD-WAN edge forwarding nodes 730-733. In the illustrated embodiment, the SD-WAN FEs 731-733 serve as frontend load balancers for the backend servers 741-743, respectively, and are identified as the destination machines. In other embodiments, the backend servers are directly selected by the load balancer 701.


Each set of servers 741-743 is associated with a set of load balancing weights that are used in some embodiments by the front end load balancing forwarding nodes 731-733 to distribute the data message load across the servers of their associated server sets 741-743. Each server set 741-743 is also associated with a set of load balancing weights LW741-LW743 that are used by the load balancer 701 to distribute the data message load among the different server sets. In some embodiments, the load balancing weights are derived from the set of load data (e.g., CPU load, memory load, session load, etc.) provided to, or maintained at, the load balancer 701. Also, in some embodiments, the load balancing weights LW741-LW743 represent the collective load among the servers of each server set, while the load balancing weights used by the forwarding nodes 731-733 represent the load among the individual servers in each server set associated with each forwarding node.


The network 700 also includes a set of SD-WAN hubs 721-723 that facilitate connections between SD-WAN edge forwarding nodes 730-733 in some embodiments. SD-WAN hubs 721-723, in some embodiments, execute in different physical locations (e.g., different datacenters) while in other embodiments some or all of the SD-WAN hubs 721-723 are in a single hub cluster at a particular physical location (e.g., an enterprise datacenter). SD-WAN hubs 721-723, in the illustrated embodiment, provide connections between the SD-WAN edge forwarding nodes 730-733 of the SD-WAN sites. In this example, communications between SD-WAN forwarding nodes have to pass through an SD-WAN hub so that data messages receive services (e.g., firewall, deep packet inspection, other middlebox services, etc.) provided at the datacenter in which the hub is located. In other embodiments (e.g., the embodiments illustrated in FIGS. 2, 3, and 10), edge forwarding nodes have direct node-to-node connections, and communication between pairs of such nodes uses these connections and does not pass through any intervening hub or CGW.


The load balancer 701 receives the load balancing data (i.e., load weights LW741-LW743) and link state data (e.g., network weights (NW)) for the connection links between the SD-WAN elements. The link state data, as described above in relation to FIGS. 4 and 5, is either a set of network weights or is used to calculate the set of network weights used by the load balancer. The link state data is generated differently in different embodiments. For instance, in some embodiments, it is generated by link-state monitors associated with the edge forwarding nodes 730-733 (e.g., monitors at the same location or executing on the same computers as the forwarding nodes), while in other embodiments, it is generated by the SD-WAN controllers.



FIG. 7 illustrates two different load balancing embodiments using load balancing information 760A and 760B that include a list of destination machines 761A and 761B, respectively, and a list of weights 762A and 762B, respectively, associated with (1) the list of destination machines, which in this example are server sets 741-743, and (2) the list of paths to the destination machines. As indicated by the function notation in the lists 762A and 762B, the weights in these lists are a function of a load weight and a network weight for a particular destination machine.


Between the edge forwarding element 730 and a destination edge forwarding element associated with a selected server set, there can be multiple paths through multiple links of the edge forwarding element 730 and multiple hubs. For instance, there are three paths between the forwarding elements 730 and 731 through hubs 721-723. If the forwarding element 730 connects to one hub through multiple physical links (e.g., connects to hub 721 through two datapaths using two physical links of the forwarding element 730), then multiple paths would exist between the forwarding elements 730 and 731 through the multiple datapaths (facilitated by the multiple physical links of the forwarding element 730) between the forwarding element 730 and the hub 721.


As mentioned above, the load balancers use different definitions of a destination machine in different embodiments. Load balancing information 760A defines destination machines using the edge nodes 731-733 (representing the sets of servers 741-743) such that a particular edge node (e.g., the edge node 731) is selected. The particular edge node is selected based on a weight that is a function of a load weight (e.g., LW741) associated with the edge node and a network weight (e.g., NW0X) associated with a set of datapaths available to reach the edge node. The network weight (e.g., NW0X) in turn is a function of a set of network weights associated with each connection link or set of connection links available to reach the destination machine.


For example, to calculate the network weight NW0X, a load balancer, SD-WAN controller, or SD-WAN edge FE determines all the possible paths to the SD-WAN node 731 and calculates a network weight for each path based on link state data received regarding the connection links that make up the possible paths. Accordingly, NW0X is illustrated as a function of network weights NW0AX, NW0ABX, NW0BX, NW0BAX, and NW0CX calculated for each connection link based on link state data. The link state data for a particular connection link, in some embodiments, reflects not only the characteristics of the intervening network but also reflects the functionality of the endpoints of the connection link (e.g., an endpoint with an overloaded queue may increase the rate of data message loss, jitter, or latency). In some embodiments, the link state data is used directly to calculate the network weight NW0X instead of calculating intermediate network weights.


Load balancing information 760B defines destination machines using the datapaths to edge nodes 731-733 (representing the sets of servers 741-743) such that a particular datapath to a particular edge node is selected. The particular datapath is selected based on a weight (e.g., a destination weight) that is a function of (1) a load weight (e.g., LW741) associated with the particular edge node to which the datapath connects the source edge node, and (2) a network weight (e.g., NW0AX) associated with the particular datapath. The network weight (e.g., NW0AX), in turn, is a function of a set of network weights associated with each connection link that defines the particular datapath.


For example, to calculate the network weight NW0AX, a load balancer, SD-WAN controller, or SD-WAN edge FE determines the connection links used in the datapath to the SD-WAN node 731 and calculates a network weight (e.g., NW0A and NWAX) for each connection link based on link state data received regarding the connection links that make up the datapath. In some embodiments, the link state data is used directly to calculate the network weight NW0AX instead of calculating intermediate network weights. In some embodiments, the weight is also affected by the number of possible paths, such that a capacity of a destination machine (e.g., a set of servers) reflected in the weight value also reflects the fact that the same set of servers is identified by multiple destination machines defined by datapaths.


Under either approach, the use of network characteristics (e.g., link state data) that would otherwise be unavailable to the load balancer allows the load balancer to make better decisions than it could make without the network information. For instance, without network information, a load balancing operation based on a least connection method may identify a destination machine as having the most capacity even though that machine is reached over a connection link (or set of connection links) that is unreliable or has lower capacity than the machine itself. In such a situation, the real utilization of the available resources is higher than the connection count reflects, and the machine would be identified as having a higher capacity than a different destination machine that actually has more effective capacity once the network information is taken into account. Accordingly, the reliability, speed, and QoE of the links between a load balancer and a destination machine can be considered when making a load balancing decision.



FIG. 8 illustrates a network 800 in which a load balancing device 801 uses a load weight 862 and a network weight 863 associated with each of a set of destination machines 861 (e.g., server clusters 841-843) located at multiple SD-WAN sites to select a destination machine for each received data message. The network 800 includes four edge nodes 830-833 associated with four SD-WAN sites 850-853. In the illustrated embodiment, the SD-WAN forwarding nodes 831-833 serve as frontend devices for the backend servers 841-843, respectively, and are identified as the destination machines. Each set of servers 841-843 is associated with a respective load weight LW841-LW843, which in some embodiments represents a set of load data (e.g., CPU load, memory load, session load, etc.) provided to, or maintained at, the load balancer 801.


The network 800 also includes a set of SD-WAN hubs 821-823 that facilitate connections between SD-WAN edge devices in some embodiments. As in FIG. 7, SD-WAN hubs 821-823, in some embodiments, execute in different physical locations (e.g., different datacenters) while in other embodiments two or more of SD-WAN hubs 821-823 are in a single hub cluster at a particular physical location (e.g., an enterprise datacenter). SD-WAN hubs 821-823, in the illustrated embodiment, serve as interconnecting hubs for the connections between the SD-WAN edge devices 830-833.


The load balancer 801 receives the load balancing data 860 (i.e., load weights LW841-LW843) and link state data (e.g., network weights (NW)) for the connection links between the SD-WAN elements. The load balancing information 860 defines destination machines using the edge nodes 831-833 (representing the sets of servers 841-843) such that a particular edge node (e.g., the edge node 831 associated with server set 841) is selected. Specifically, the load balancer 801 uses both the load balancing data and link state data as weight values for performing its selection of the different server sets as the different destinations for the different data message flows.


In some embodiments, the load balancer 801 produces an aggregate weight from both the network and load weights NW and LW associated with a server set, and then uses the aggregate weights to select a server set among the server sets for a data message flow. In other embodiments, it does not generate an aggregate weight from the network and load weights but uses another approach (e.g., uses the network weights as constraints to eliminate one or more of the server sets when the SD-WAN connections to the server sets are unreliable).
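These two approaches can be sketched minimally as below. The blending parameter `alpha`, the reliability floor `nw_floor`, and all names are illustrative assumptions; the source does not specify how the weights are combined:

```python
def aggregate_weight(lw, nw, alpha=0.5):
    # First approach: blend the load weight and network weight into a
    # single score (alpha is a hypothetical mixing knob).
    return alpha * lw + (1 - alpha) * nw

def eliminate_unreliable(server_sets, nw, nw_floor):
    # Second approach: use network weights purely as a constraint,
    # dropping server sets whose SD-WAN connectivity is below a floor;
    # the survivors are then load balanced on load weight alone.
    return [s for s in server_sets if nw[s] >= nw_floor]
```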


The link state data, as described above in relation to FIGS. 4 and 5, is either a set of network weights or is used to calculate the set of network weights used by the load balancer. In some embodiments, load balancing information 860 associates the destination machines with a single network weight NW calculated for the set of datapaths available to reach the edge node. In some embodiments, the network weight for a particular SD-WAN forwarding node 831, 832 or 833 is a function of the network weights associated with each path from the SD-WAN forwarding node 830 to the particular SD-WAN forwarding node 831, 832 or 833, as illustrated by the equations in FIG. 8, and as described above by reference to FIG. 7. The selection of a particular edge node for a data message is performed, in some embodiments, as described in relation to FIG. 6 for embodiments that select among edge nodes or destination machines instead of datapaths.



FIG. 9 illustrates a network 900 in which a load balancing device 901 uses a load weight 962 and a network weight 964 associated with each of a set of datapaths 963 (e.g., AX, BX, etc.) to a set of edge forwarding nodes of the SD-WAN to select a particular datapath to a particular edge node for each received data message. This network 900 includes four edge forwarding nodes 930-933 associated with four SD-WAN sites 950-953. In the illustrated embodiment, the SD-WAN FEs 931-933 serve as frontend load-balancing devices for the backend servers 941-943, respectively, and are identified as the destination machines. Each set of servers 941-943 is associated with a load weight LW941-LW943, which in some embodiments represents a set of load data (e.g., CPU load, memory load, session load, etc.) provided to, or maintained at, the load balancer.


The network 900 also includes a set of SD-WAN hubs 921-923 that facilitate connections between SD-WAN edge devices in some embodiments. As in FIG. 7, SD-WAN hubs 921-923, in some embodiments, execute in different physical locations (e.g., different datacenters) while in other embodiments some or all of the SD-WAN hubs 921-923 are in a single hub cluster at a particular physical location (e.g., an enterprise datacenter). SD-WAN hubs 921-923, in the illustrated embodiment, provide connections between the SD-WAN edge devices 930-933.


The load balancer 901 receives the load balancing data 960 (i.e., load weights LW941-LW943) and link state data (e.g., network weights (NW)) for the connection links between the SD-WAN elements. The link state data, as described above in relation to FIGS. 4 and 5, is either a set of network weights or is a set of attributes used to calculate the set of network weights used by the load balancer. The load balancing information 960 has a destination machine identifier 961 (which in some embodiments identifies one of the edge nodes 931-933) to represent the server sets 941-943, and associates each destination with a load weight 962.


Additionally, load balancing information 960 identifies each datapath 963 to an edge node and stores a network weight 964 for each datapath 963. The network weight of each datapath, in some embodiments, is received as link state data, while in other embodiments the link state data is connection link attribute data (e.g., an intermediate network weight, or measures of connection link attributes) that is used to calculate the network weight for each datapath.


Based on the load weight 962, the load balancer 901 initially performs a first load-balancing operation to select (e.g., through a round robin selection that is based on the load weight) a particular candidate edge node from a set of candidate edge nodes. To do this, the load balancer in some embodiments performs an operation similar to operation 620 of FIG. 6. Based on the network weight, the load balancer then performs a second load-balancing operation (similar to operation 630 of FIG. 6) to select (e.g., through a round robin selection that is based on the network weight) a particular datapath to the selected edge node from one or more candidate datapaths to that edge node. By using this two-step load balancing operation, the load balancer 901 can identify candidate destination machines that meet certain criteria and then apply knowledge of the intervening network to select a particular datapath to a candidate destination machine that meets a different set of criteria taking into account the quality of the network connectivity (e.g., meets a minimum QoE metric).
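The two-step operation can be sketched as follows. This is an illustrative sketch in which a higher load weight means more spare capacity and a higher network weight means better connectivity; the node names, path names, and numeric weights are all hypothetical:

```python
def two_step_select(load_weights, path_weights, n=2):
    # Step 1 (like operation 620 of FIG. 6): keep the n edge nodes with
    # the most spare capacity.
    candidates = sorted(load_weights, key=load_weights.get, reverse=True)[:n]
    # Step 2 (like operation 630 of FIG. 6): among all datapaths to those
    # candidates, pick the datapath with the best network weight.
    best = None
    for (node, path), nw in path_weights.items():
        if node in candidates and (best is None or nw > best[2]):
            best = (node, path, nw)
    return best[:2]  # (selected edge node, selected datapath)

lw = {"X": 0.8, "Y": 0.5, "Z": 0.1}                    # load weights
pw = {("X", "AX"): 2, ("X", "BX"): 6, ("Y", "CY"): 9,  # network weights
      ("Z", "AZ"): 10}                                 # per datapath
chosen = two_step_select(lw, pw)  # ("Y", "CY")
```

Note that node "Z" has the best-connected datapath overall but is excluded in step 1 for lack of spare capacity, which is exactly the filtering the two-step operation provides.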



FIG. 10 illustrates a full mesh network among a set of SD-WAN edge nodes 1030-1032 and a set of SD-WAN hubs 1021-1023 connected by connection links of different qualities. In the illustrated embodiment, each connection link is assigned a network weight (e.g., a score) that is then compared to a set of two threshold network weights “T1” and “T2” that, in some embodiments, are user-specified. In other embodiments, the single network weight is replaced by a set of network weights for different attributes that can be used for load balancing different applications that are sensitive to different attributes of the connection links (e.g., flows that place heavier weight on speed (low latency) than on jitter or packet loss). The choice of two threshold values is for illustrative purposes and is not to be understood as limiting.


Exemplary network weight calculations for each individual datapath and for collections of datapaths are illustrated using table 1002 which provides a legend identifying network weights of each connection link and equations 1003 and 1004. Equations 1003 and 1004 represent a simple min or max equation that identifies the network weight associated with the weakest connection link in a datapath as the network weight for the individual datapath and the network weight associated with the datapath with the highest network weight in a set of datapaths as the network weight for the set of datapaths between a source and a destination.


Using the minimum value for a particular datapath reflects the fact that for a particular datapath defined as traversing a particular set of connection links, the worst (e.g., slowest, most lossy, etc.) connection link will limit the connectivity along the datapath. In contrast, for a set of datapaths, the best datapath can be selected such that the best datapath defines the connectivity of the source and destination. For specific characteristics, such as a loss rate, a multiplicative formula, in some embodiments, will better reflect the loss rate (e.g., a number of data messages received divided by the total number of data messages sent). One of ordinary skill in the art will appreciate that the functions can be defined in many ways based on the number of different characteristics or attributes being considered and how they interact.
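Equations 1003 and 1004, together with the multiplicative loss formula, can be sketched as below. The numeric link weights and loss rates are hypothetical, not taken from the figure:

```python
def path_weight(link_weights):
    # Equation 1003: a datapath's weight is that of its weakest link.
    return min(link_weights)

def pair_weight(paths):
    # Equation 1004: the best datapath defines the connectivity between
    # a source and a destination.
    return max(path_weight(p) for p in paths)

def path_delivery_rate(link_loss_rates):
    # For loss specifically, a multiplicative formula is more faithful:
    # the fraction of messages surviving the whole path is the product
    # of the per-link delivery rates (1 - loss).
    rate = 1.0
    for loss in link_loss_rates:
        rate *= (1.0 - loss)
    return rate

# Hypothetical per-link weights for two datapaths between a source edge
# node and a destination gateway.
best = pair_weight([[7, 3, 9], [5, 8]])  # path weights 3 and 5; best is 5
```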


The results of equations 1003 and 1004 are illustrated in table 1005 identifying each individual datapath from SD-WAN Edge FE 1030 to SD-WAN FE 1031 (e.g., gateway “X”). Similar equations can be used to identify a network weight for datapaths (and the set of datapaths) from SD-WAN Edge FE 1030 to SD-WAN FE 1032 (e.g., gateway “Y”). As discussed above, some embodiments use the network weights for the individual datapaths to make load balancing decisions, while some embodiments use the network weight for the set of datapaths connecting a source and destination. However, one of ordinary skill in the art will appreciate that more complicated formulas that take into account the number of hops, or the individual characteristics that were used to calculate the network weight for each connection link, are used to compute a network weight or other value associated with each datapath or destination.


In the examples illustrated in FIGS. 2, 3, and 7-10, each edge forwarding node is said to perform the load balancing operations to select one destination machine from a set of destination machines associated with the edge forwarding node. In some embodiments, the edge forwarding node performs the load balancing operations by executing a load-balancing process. In other embodiments, the edge forwarding node directs a load balancer or set of load balancers that are co-located with the edge forwarding node at an SD-WAN site to perform the load-balancing operations for new data message flows that the edge forwarding node receives, and then forwards the data message flows to the destination machines selected by the load balancer(s). In still other embodiments, the edge forwarding node simply forwards the data message flows to a load balancer operating in the same SD-WAN site, and this load balancer selects the destination machines for each data message flow and forwards each flow to the destination machine that the load balancer selects.



FIG. 11 illustrates a GSLB system 1100 that uses the network-aware load balancing of some embodiments. In this example, backend application servers 1105a-d are deployed in four datacenters 1102-1108, three of which are private datacenters 1102-1106 and one of which is a public datacenter 1108. The datacenters 1102-1108 in this example are in different geographical sites (e.g., different neighborhoods, different cities, different states, different countries, etc.).


A cluster of one or more controllers 1110 is deployed in each datacenter 1102-1108. Each datacenter 1102-1108 also has a cluster 1115 of load balancers 1117 to distribute the data message load across the backend application servers 1105 in the datacenter. In this example, three datacenters 1102, 1104, and 1108 also have a cluster 1120 of DNS service engines 1125 to perform DNS operations to process DNS requests (e.g., to provide network addresses for a domain name) submitted by machines 1130 inside or outside of the datacenters. In some embodiments, the DNS requests include requests for fully qualified domain name (FQDN) address resolutions.



FIG. 11 illustrates the resolution of an FQDN that refers to a particular application “A” that is executed by the servers of the domain acme.com. As shown, this application is accessed through HTTPS and the URL “A.acme.com.” The DNS request for this application is resolved in three steps. First, a public DNS resolver 1160 initially receives the DNS request and forwards this request to the private DNS resolver 1165 of the enterprise that owns or manages the private datacenters 1102-1106.


Second, the private DNS resolver 1165 selects one of the DNS clusters 1120. This selection is based on a set of load balancing criteria that distributes the DNS request load across the DNS clusters 1120. In the example illustrated in FIG. 11, the private DNS resolver 1165 selects the DNS cluster 1120b of the datacenter 1104.


Third, the selected DNS cluster 1120b resolves the domain name to an IP address. In some embodiments, each DNS cluster 1120 includes multiple DNS service engines 1125, such as DNS service virtual machines (SVMs) that execute on host computers in the cluster's datacenter. When a DNS cluster 1120 receives a DNS request, a frontend load balancer (not shown) in some embodiments selects a DNS service engine 1125 in the cluster 1120 to respond to the DNS request, and forwards the DNS request to the selected DNS service engine 1125. Other embodiments do not use a frontend load balancer, and instead have a DNS service engine 1125 serve as a frontend load balancer that selects itself or another DNS service engine 1125 in the same cluster 1120 for processing the DNS request.


The DNS service engine 1125b that processes the DNS request then uses a set of criteria to select one of the backend server clusters 1105 for processing data message flows from the machine 1130 that sent the DNS request. The set of criteria for this selection in some embodiments includes at least one of (1) load weights identifying some measure of load on each backend cluster 1105, (2) a set of network weights as described above reflecting a measure of connectivity, and (3) a set of health metrics as further described in U.S. patent application Ser. No. 16/746,785 filed on Jan. 17, 2020, now published as U.S. Patent Publication 2020/0382584, which is incorporated herein by reference. Also, in some embodiments, the set of criteria includes load balancing criteria that the DNS service engines use to distribute the data message load on backend servers that execute application “A.”
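A minimal sketch of combining such criteria into a single selection follows. The scoring formula, field names, and the numbers given to each cluster are illustrative assumptions, not the formula of the specification; it only shows how load weights, network weights, and health metrics could jointly drive the choice.

```python
def select_backend_cluster(clusters):
    """Pick the backend cluster with the best combined score.

    Each entry supplies a load weight (lower load is better), a network
    weight derived from link state data (higher is better), and a health
    metric in [0, 1]. The scoring formula below is illustrative only.
    """
    def score(entry):
        return entry["network_weight"] * entry["health"] / (1.0 + entry["load"])
    return max(clusters, key=lambda name: score(clusters[name]))

# Hypothetical metrics: 1105a is well connected but heavily loaded,
# 1105c is lightly loaded, so it wins under this formula.
clusters = {
    "1105a": {"load": 0.8, "network_weight": 0.9, "health": 1.0},
    "1105c": {"load": 0.2, "network_weight": 0.7, "health": 1.0},
}
best = select_backend_cluster(clusters)  # "1105c" with these numbers
```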


In the example illustrated in FIG. 11, the selected backend server cluster is the server cluster 1105c in the private datacenter 1106. After selecting this backend server cluster 1105c for the DNS request that it receives, the DNS service engine 1125b of the DNS cluster 1120b returns a response to the requesting machine. As shown, this response includes the VIP address associated with the selected backend server cluster 1105c. In some embodiments, this VIP address is associated with the local load balancer cluster 1115c that is in the same datacenter 1106 as the selected backend server cluster.


After getting the VIP address, the machine 1130 sends one or more data message flows to the VIP address for a backend server cluster 1105 to process. In this example, the data message flows are received by the local load balancer cluster 1115c. In some embodiments, each load balancer cluster 1115 has multiple load balancing engines 1117 (e.g., load balancing SVMs) that execute on host computers in the cluster's datacenter.


When the load balancer cluster receives the first data message of the flow, a frontend load balancer (not shown) in some embodiments selects a load balancing service engine 1117 in the cluster 1115 to select a backend server 1105 to receive the data message flow, and forwards the data message to the selected load balancing service engine 1117. Other embodiments do not use a frontend load balancer, and instead have a load balancing service engine in the cluster serve as a frontend load balancer that selects itself or another load balancing service engine in the same cluster for processing the received data message flow.


When a selected load balancing service engine 1117 processes the first data message of the flow, this service engine 1117 uses a set of load balancing criteria (e.g., a set of weight values) to select one backend server from the cluster of backend servers 1105c in the same datacenter 1106. The load balancing service engine 1117 then replaces the VIP address with an actual destination IP (DIP) address of the selected backend server 1105c, and forwards the data message and subsequent data messages of the same flow to the selected backend server 1105c. The selected backend server 1105c then processes the data message flow and, when necessary, sends a responsive data message flow to the machine 1130. In some embodiments, the responsive data message flow passes through the load balancing service engine 1117 that selected the backend server 1105c for the initial data message flow from the machine 1130.
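As a minimal sketch (not the patented implementation), the weighted backend selection, VIP-to-DIP rewrite, and per-flow stickiness described above might look like the following; the addresses, weights, and class names are hypothetical.

```python
import random

class LoadBalancingEngine:
    """Sketch of the VIP-to-DIP rewrite performed by a service engine 1117."""

    def __init__(self, vip, backends):
        # `backends` maps a destination IP (DIP) to a load balancing weight.
        self.vip = vip
        self.backends = backends
        self.flows = {}  # flow id -> chosen DIP, so a flow stays pinned

    def forward(self, flow_id, dst_ip):
        if dst_ip != self.vip:
            return dst_ip  # not addressed to the VIP; leave untouched
        if flow_id not in self.flows:
            # Weighted pick for the first data message of the flow only.
            dips = list(self.backends)
            weights = [self.backends[d] for d in dips]
            self.flows[flow_id] = random.choices(dips, weights=weights)[0]
        return self.flows[flow_id]

engine = LoadBalancingEngine("10.0.0.100", {"10.1.0.1": 3, "10.1.0.2": 1})
dip = engine.forward(("10.9.9.9", 443), "10.0.0.100")
same = engine.forward(("10.9.9.9", 443), "10.0.0.100")  # same flow, same DIP
```

Pinning the flow id to its chosen DIP mirrors the requirement that subsequent data messages of the same flow reach the same selected backend server.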



FIG. 12 illustrates a network-aware GSLB system 1200 deployed in an SD-WAN that uses the network-aware load balancing of some embodiments. The system 1200 includes a set of four datacenters 1202-1208, three of which are private datacenters 1202-1206 and one of which is a public datacenter 1208, as in FIG. 11. The four datacenters 1202-1208 are part of the SD-WAN, and each hosts an SD-WAN edge device 1245 (e.g., a multi-tenant SD-WAN edge FE, gateway, or hub) that facilitates communications within the SD-WAN. The four datacenters 1202-1208, in this embodiment, are connected by a set of hubs 1250a-b in datacenters 1275a-b (e.g., private or public datacenters) that facilitate communication between external or internal machines 1230a-b and the backend servers 1205. As shown, external machine 1230a connects to the hubs 1250a-b through the internet 1270, and the hubs 1250a-b may also serve as gateways for access to external networks or machines.


As in FIG. 3, the SD-WAN controller cluster 1240 sends link state data (LSD) to other load balancing elements of the SD-WAN. In system 1200, the controller cluster 1240 generates (1) link state data (e.g., DNS-LSD 1241) for load balancing among the DNS servers and (2) link state data (e.g., APP-LSD 1242) for load balancing among the applications (i.e., the sets of backend servers 1205). The DNS-LSD 1241 is provided to the private DNS resolver 1265, which uses it to perform the first level of load balancing among the DNS servers in the different datacenters based on load weights, the link state data (or data derived from the link state data), and a set of load balancing criteria, similarly to the process for selecting a destination machine described above in relation to FIGS. 6-10. The APP-LSD 1242 is provided to the DNS service engines 1225a-d to perform the second level of load balancing among the backend server clusters 1205a-d, likewise based on load weights, the link state data (or data derived from the link state data), and a set of load balancing criteria. In the illustrated embodiment, the load balancer clusters 1215a-d are not provided with any link state data, as connections within a datacenter are not usually subject to the same variations in connectivity as connection links between datacenters.
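The aggregation of per-link measurements into per-datapath link state data can be sketched as follows, consistent with the computations recited in the claims below (a datapath's latency as the maximum latency of any of its links, and its delivery rate as the product of the per-link delivery rates). The field names are illustrative, and summing jitter across links is a simplifying assumption of this sketch.

```python
from dataclasses import dataclass

@dataclass
class LinkState:
    latency_ms: float      # measured latency of the link
    delivery_rate: float   # fraction of data messages delivered (0..1)
    jitter_ms: float       # measured jitter of the link

def datapath_state(links):
    """Aggregate per-link measurements into per-datapath link state data.

    Latency is the maximum latency of any link on the path, and the
    delivery rate is the product of the per-link delivery rates.
    """
    latency = max(link.latency_ms for link in links)
    rate = 1.0
    for link in links:
        rate *= link.delivery_rate
    jitter = sum(link.jitter_ms for link in links)  # simplifying assumption
    return LinkState(latency, rate, jitter)

# A two-link datapath, e.g. edge-to-hub followed by hub-to-edge.
path = [LinkState(20.0, 0.99, 2.0), LinkState(35.0, 0.98, 1.0)]
agg = datapath_state(path)  # latency 35.0, delivery rate 0.99 * 0.98 = 0.9702
```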


Many of the above-described features and applications are implemented as software processes that are specified as a set of instructions recorded on a computer-readable storage medium (also referred to as computer-readable medium). When these instructions are executed by one or more processing unit(s) (e.g., one or more processors, cores of processors, or other processing units), they cause the processing unit(s) to perform the actions indicated in the instructions. Examples of computer-readable media include, but are not limited to, CD-ROMs, flash drives, RAM chips, hard drives, EPROMs, etc. The computer-readable media do not include carrier waves and electronic signals passing wirelessly or over wired connections.


In this specification, the term “software” is meant to include firmware residing in read-only memory or applications stored in magnetic storage, which can be read into memory for processing by a processor. Also, in some embodiments, multiple software inventions can be implemented as sub-parts of a larger program while remaining distinct software inventions. In some embodiments, multiple software inventions can also be implemented as separate programs. Finally, any combination of separate programs that together implement a software invention described here is within the scope of the invention. In some embodiments, the software programs, when installed to operate on one or more electronic systems, define one or more specific machine implementations that execute and perform the operations of the software programs.



FIG. 13 conceptually illustrates a computer system 1300 with which some embodiments of the invention are implemented. The computer system 1300 can be used to implement any of the above-described hosts, controllers, gateway and edge forwarding elements. As such, it can be used to execute any of the above-described processes. This computer system 1300 includes various types of non-transitory machine-readable media and interfaces for various other types of machine-readable media. Computer system 1300 includes a bus 1305, processing unit(s) 1310, a system memory 1325, a read-only memory 1330, a permanent storage device 1335, input devices 1340, and output devices 1345.


The bus 1305 collectively represents all system, peripheral, and chipset buses that communicatively connect the numerous internal devices of the computer system 1300. For instance, the bus 1305 communicatively connects the processing unit(s) 1310 with the read-only memory 1330, the system memory 1325, and the permanent storage device 1335.


From these various memory units, the processing unit(s) 1310 retrieve instructions to execute and data to process in order to execute the processes of the invention. The processing unit(s) may be a single processor or a multi-core processor in different embodiments. The read-only memory (ROM) 1330 stores static data and instructions that are needed by the processing unit(s) 1310 and other modules of the computer system. The permanent storage device 1335, on the other hand, is a read-and-write memory device. This device is a non-volatile memory unit that stores instructions and data even when the computer system 1300 is off. Some embodiments of the invention use a mass-storage device (such as a magnetic or optical disk and its corresponding disk drive) as the permanent storage device 1335.


Other embodiments use a removable storage device (such as a floppy disk, flash drive, etc.) as the permanent storage device 1335. Like the permanent storage device 1335, the system memory 1325 is a read-and-write memory device. However, unlike storage device 1335, the system memory 1325 is a volatile read-and-write memory, such as random access memory. The system memory 1325 stores some of the instructions and data that the processor needs at runtime. In some embodiments, the invention's processes are stored in the system memory 1325, the permanent storage device 1335, and/or the read-only memory 1330. From these various memory units, the processing unit(s) 1310 retrieve instructions to execute and data to process in order to execute the processes of some embodiments.


The bus 1305 also connects to the input and output devices 1340 and 1345. The input devices 1340 enable the user to communicate information and select commands to the computer system 1300. The input devices 1340 include alphanumeric keyboards and pointing devices (also called “cursor control devices”). The output devices 1345 display images generated by the computer system 1300. The output devices 1345 include printers and display devices, such as cathode ray tubes (CRT) or liquid crystal displays (LCD). Some embodiments include devices such as touchscreens that function as both input and output devices 1340 and 1345.


Finally, as shown in FIG. 13, bus 1305 also couples computer system 1300 to a network 1365 through a network adapter (not shown). In this manner, the computer 1300 can be a part of a network of computers (such as a local area network (“LAN”), a wide area network (“WAN”), or an Intranet), or a network of networks (such as the Internet). Any or all components of computer system 1300 may be used in conjunction with the invention.


Some embodiments include electronic components, such as microprocessors, storage and memory that store computer program instructions in a machine-readable or computer-readable medium (alternatively referred to as computer-readable storage media, machine-readable media, or machine-readable storage media). Some examples of such computer-readable media include RAM, ROM, read-only compact discs (CD-ROM), recordable compact discs (CD-R), rewritable compact discs (CD-RW), read-only digital versatile discs (e.g., DVD-ROM, dual-layer DVD-ROM), a variety of recordable/rewritable DVDs (e.g., DVD-RAM, DVD-RW, DVD+RW, etc.), flash memory (e.g., SD cards, mini-SD cards, micro-SD cards, etc.), magnetic and/or solid state hard drives, read-only and recordable Blu-Ray® discs, ultra-density optical discs, any other optical or magnetic media, and floppy disks. The computer-readable media may store a computer program that is executable by at least one processing unit and includes sets of instructions for performing various operations. Examples of computer programs or computer code include machine code, such as is produced by a compiler, and files including higher-level code that are executed by a computer, an electronic component, or a microprocessor using an interpreter.


While the above discussion primarily refers to microprocessors or multi-core processors that execute software, some embodiments are performed by one or more integrated circuits, such as application-specific integrated circuits (ASICs) or field-programmable gate arrays (FPGAs). In some embodiments, such integrated circuits execute instructions that are stored on the circuit itself.


As used in this specification, the terms “computer”, “server”, “processor”, and “memory” all refer to electronic or other technological devices. These terms exclude people or groups of people. For the purposes of the specification, the terms “display” or “displaying” mean displaying on an electronic device. As used in this specification, the terms “computer-readable medium,” “computer-readable media,” and “machine-readable medium” are entirely restricted to tangible, physical objects that store information in a form that is readable by a computer. These terms exclude any wireless signals, wired download signals, and any other ephemeral or transitory signals.


While the invention has been described with reference to numerous specific details, one of ordinary skill in the art will recognize that the invention can be embodied in other specific forms without departing from the spirit of the invention. For instance, several of the above-described embodiments deploy gateways in public cloud datacenters. However, in other embodiments, the gateways are deployed in a third-party's private cloud datacenters (e.g., datacenters that the third-party uses to deploy cloud gateways for different entities in order to deploy virtual networks for these entities). Thus, one of ordinary skill in the art would understand that the invention is not to be limited by the foregoing illustrative details, but rather is to be defined by the appended claims.

Claims
  • 1. A method of providing network-aware load balancing for data messages traversing a software-defined wide area network (SD-WAN) comprising a plurality of connection links between different elements of the SD-WAN, the method comprising: at an SD-WAN controller: receiving data regarding link state characteristics of a plurality of physical connection links at a first SD-WAN site from a plurality of SD-WAN elements of the SD-WAN, wherein at least two SD-WAN elements in the plurality of elements that provide the received data to the SD-WAN controller are at second and third SD-WAN sites respectively and communicate with the first site through one or more links of the plurality of physical connection links; generating link state data relating to the plurality of physical connection links at the first SD-WAN site based on the received data regarding link state characteristics; and providing the generated link state data to a load balancer of the first SD-WAN site, wherein the load balancer generates load balancing criteria based on the generated link state data and, based on the generated load balancing criteria, distributes flows that are addressed to a common destination among the plurality of physical connection links.
  • 2. The method of claim 1, wherein providing the link state data to the load balancer comprises providing the link state data to an SD-WAN edge device of the first SD-WAN site that provides the link state data to the load balancer.
  • 3. The method of claim 1, wherein generating link state data relating to the plurality of physical connection links comprises: identifying a set of datapaths connecting the load balancer to each destination machine in a set of destination machines, each datapath comprising an ordered set of connection links; and generating link state data for each datapath based on the received data regarding link state characteristics.
  • 4. The method of claim 1 further comprising: receiving a set of load data for destination machines in a set of destination machines, wherein generating the link state data is further based on the received load data, and the link state data comprises a set of weights for the load balancer to use to provide the load balancing.
  • 5. The method of claim 2, wherein the link state data is modified by the SD-WAN edge device of the first SD-WAN site for consumption by the load balancer.
  • 6. The method of claim 3, wherein the link state data comprises a current measure of latency for each datapath in the set of datapaths.
  • 7. The method of claim 3, wherein the link state data comprises a measure of latency for each SD-WAN connection link that is included in any datapath in the set of datapaths, and a measure of latency for a particular datapath is calculated based on the received measures of latency for each SD-WAN connection link that makes up the datapath.
  • 8. The method of claim 3, wherein the link state data regarding link state characteristics comprises a current measure of data message loss for each datapath in the set of datapaths.
  • 9. The method of claim 3, wherein the link state data relating to the plurality of physical connections that is generated based on received data regarding link state characteristics comprises a current measure of jitter for each datapath in the set of datapaths.
  • 10. The method of claim 3, wherein the link state data relating to the plurality of physical connections that is generated based on received data regarding link state characteristics comprises a current measure of quality of experience score for each datapath in the set of datapaths based on at least one of a current measure of latency, a current measure of data message loss, and a current measure of jitter for the datapath.
  • 11. The method of claim 4, wherein the link state data comprises a single weight for each destination machine based on the load data and data regarding link state characteristics.
  • 12. The method of claim 4, wherein the link state data comprises, for each destination machine, (1) a first load weight indicating at least one of a CPU load, a memory load, and a session load based on the received load data and (2) a second network weight associated with a set of connection links connecting the load balancer to the destination machine based on the received link state characteristic data.
  • 13. The method of claim 6, wherein the link state data regarding link state characteristics further comprises a historical measure of latency for each datapath in the set of datapaths.
  • 14. The method of claim 7, wherein the measure of latency for a particular datapath is a maximum latency of any communication link included in the datapath.
  • 15. The method of claim 8, wherein the link state data regarding link state characteristics further comprises a historical measure of data message loss for each datapath in the set of datapaths.
  • 16. The method of claim 8, wherein: the current measure of data message loss comprises a loss rate expressed as a number between 0 and 1 that reflects a number of data messages sent across the datapath that reach their destination;the data regarding link state characteristics comprises a loss rate for each SD-WAN connection link that is included in any datapath in the set of datapaths; andthe current measure of data message loss for a datapath in the set of datapaths is based on multiplying a loss rate for each SD-WAN connection link that is included in the datapath in the set of datapaths.
  • 17. The method of claim 9, wherein the link state data relating to the plurality of physical connections that is generated based on received data regarding link state characteristics further comprises a historical measure of jitter for each datapath in the set of datapaths.
  • 18. The method of claim 10, wherein the link state data relating to the plurality of physical connections that is generated based on received data regarding link state characteristics further comprises a historical measure of quality of experience score for each datapath in the set of datapaths based on at least one of a historical measure of latency, a historical measure of data message loss, and a historical measure of jitter for the datapath.
  • 19. The method of claim 3, wherein a plurality of datapaths connecting the load balancer to the set of destination machines is identified and the link state data comprises, for each datapath, (1) a first load weight indicating at least one of a CPU load, a memory load, and a session load on the associated destination machine based on the received load data and (2) a second network weight associated with a set of connection links making up the datapath based on the received link state characteristic data.
  • 20. The method of claim 3, wherein the set of destination machines comprises a set of frontend load balancers for a set of backend compute nodes.
  • 21. A non-transitory machine readable medium storing a program for execution by at least one processor, the program for performing the method of any one of claims 1-20.
Priority Claims (1)
Number Date Country Kind
202141002321 Jan 2021 IN national
20120140935 Kruglick Jun 2012 A1
20120157068 Eichen et al. Jun 2012 A1
20120173694 Yan et al. Jul 2012 A1
20120173919 Patel et al. Jul 2012 A1
20120182940 Taleb et al. Jul 2012 A1
20120221955 Raleigh et al. Aug 2012 A1
20120227093 Shatzkamer et al. Sep 2012 A1
20120240185 Kapoor et al. Sep 2012 A1
20120250686 Vincent et al. Oct 2012 A1
20120266026 Chikkalingaiah et al. Oct 2012 A1
20120281706 Agarwal et al. Nov 2012 A1
20120287818 Corti et al. Nov 2012 A1
20120300615 Kempf et al. Nov 2012 A1
20120307659 Yamada Dec 2012 A1
20120317270 Vrbaski et al. Dec 2012 A1
20120317291 Wolfe Dec 2012 A1
20130019005 Hui et al. Jan 2013 A1
20130021968 Reznik et al. Jan 2013 A1
20130044764 Casado et al. Feb 2013 A1
20130051237 Ong Feb 2013 A1
20130051399 Zhang et al. Feb 2013 A1
20130054763 Merwe et al. Feb 2013 A1
20130086267 Gelenbe et al. Apr 2013 A1
20130097304 Asthana et al. Apr 2013 A1
20130103729 Cooney et al. Apr 2013 A1
20130103834 Dzerve et al. Apr 2013 A1
20130117530 Kim et al. May 2013 A1
20130124718 Griffith et al. May 2013 A1
20130124911 Griffith et al. May 2013 A1
20130124912 Griffith et al. May 2013 A1
20130128889 Mathur et al. May 2013 A1
20130142201 Kim et al. Jun 2013 A1
20130170354 Takashima et al. Jul 2013 A1
20130173788 Song Jul 2013 A1
20130182712 Aguayo et al. Jul 2013 A1
20130185729 Vasic et al. Jul 2013 A1
20130191688 Agarwal et al. Jul 2013 A1
20130223226 Narayanan et al. Aug 2013 A1
20130223454 Dunbar et al. Aug 2013 A1
20130238782 Zhao et al. Sep 2013 A1
20130242718 Zhang Sep 2013 A1
20130254599 Katkar et al. Sep 2013 A1
20130258839 Wang et al. Oct 2013 A1
20130258847 Zhang et al. Oct 2013 A1
20130266015 Qu et al. Oct 2013 A1
20130266019 Qu et al. Oct 2013 A1
20130283364 Chang et al. Oct 2013 A1
20130286846 Atlas et al. Oct 2013 A1
20130297611 Moritz et al. Nov 2013 A1
20130297770 Zhang Nov 2013 A1
20130301469 Suga Nov 2013 A1
20130301642 Radhakrishnan et al. Nov 2013 A1
20130308444 Sem-Jacobsen et al. Nov 2013 A1
20130315242 Wang et al. Nov 2013 A1
20130315243 Huang et al. Nov 2013 A1
20130329548 Nakil et al. Dec 2013 A1
20130329601 Yin et al. Dec 2013 A1
20130329734 Chesla et al. Dec 2013 A1
20130346470 Obstfeld et al. Dec 2013 A1
20140016464 Shirazipour et al. Jan 2014 A1
20140019604 Twitchell, Jr. Jan 2014 A1
20140019750 Dodgson et al. Jan 2014 A1
20140040975 Raleigh et al. Feb 2014 A1
20140064283 Balus et al. Mar 2014 A1
20140071832 Johnsson et al. Mar 2014 A1
20140092907 Sridhar et al. Apr 2014 A1
20140108665 Arora et al. Apr 2014 A1
20140112171 Pasdar Apr 2014 A1
20140115584 Mudigonda et al. Apr 2014 A1
20140122559 Branson et al. May 2014 A1
20140123135 Huang et al. May 2014 A1
20140126418 Brendel et al. May 2014 A1
20140156818 Hunt Jun 2014 A1
20140156823 Liu et al. Jun 2014 A1
20140160935 Zecharia et al. Jun 2014 A1
20140164560 Ko et al. Jun 2014 A1
20140164617 Jalan et al. Jun 2014 A1
20140164718 Schaik et al. Jun 2014 A1
20140173113 Vemuri et al. Jun 2014 A1
20140173331 Martin et al. Jun 2014 A1
20140181824 Saund et al. Jun 2014 A1
20140189074 Parker Jul 2014 A1
20140208317 Nakagawa Jul 2014 A1
20140219135 Li et al. Aug 2014 A1
20140223507 Xu Aug 2014 A1
20140229210 Sharifian et al. Aug 2014 A1
20140244851 Lee Aug 2014 A1
20140258535 Zhang Sep 2014 A1
20140269690 Tu Sep 2014 A1
20140279862 Dietz et al. Sep 2014 A1
20140280499 Basavaiah et al. Sep 2014 A1
20140317440 Biermayr et al. Oct 2014 A1
20140321277 Lynn, Jr. et al. Oct 2014 A1
20140337500 Lee Nov 2014 A1
20140337674 Ivancic et al. Nov 2014 A1
20140341109 Cartmell et al. Nov 2014 A1
20140355441 Jain Dec 2014 A1
20140365834 Stone et al. Dec 2014 A1
20140372582 Ghanwani et al. Dec 2014 A1
20150003240 Drwiega et al. Jan 2015 A1
20150016249 Mukundan et al. Jan 2015 A1
20150029864 Raileanu et al. Jan 2015 A1
20150039744 Niazi et al. Feb 2015 A1
20150046572 Cheng et al. Feb 2015 A1
20150052247 Threefoot et al. Feb 2015 A1
20150052517 Raghu et al. Feb 2015 A1
20150056960 Egner et al. Feb 2015 A1
20150058917 Xu Feb 2015 A1
20150088942 Shah Mar 2015 A1
20150089628 Lang Mar 2015 A1
20150092603 Aguayo et al. Apr 2015 A1
20150096011 Watt Apr 2015 A1
20150100958 Banavalikar et al. Apr 2015 A1
20150124603 Ketheesan et al. May 2015 A1
20150134777 Onoue May 2015 A1
20150139238 Pourzandi et al. May 2015 A1
20150146539 Mehta et al. May 2015 A1
20150163152 Li Jun 2015 A1
20150169340 Haddad et al. Jun 2015 A1
20150172121 Farkas et al. Jun 2015 A1
20150172169 DeCusatis et al. Jun 2015 A1
20150188823 Williams et al. Jul 2015 A1
20150189009 Bemmel Jul 2015 A1
20150195178 Bhattacharya et al. Jul 2015 A1
20150201036 Nishiki et al. Jul 2015 A1
20150222543 Song Aug 2015 A1
20150222638 Morley Aug 2015 A1
20150236945 Michael et al. Aug 2015 A1
20150236962 Veres et al. Aug 2015 A1
20150244617 Nakil et al. Aug 2015 A1
20150249644 Xu Sep 2015 A1
20150257081 Ramanujan et al. Sep 2015 A1
20150271056 Chunduri et al. Sep 2015 A1
20150271104 Chikkamath et al. Sep 2015 A1
20150271303 Neginhal et al. Sep 2015 A1
20150281004 Kakadia et al. Oct 2015 A1
20150312142 Barabash et al. Oct 2015 A1
20150312760 O'Toole Oct 2015 A1
20150317169 Sinha et al. Nov 2015 A1
20150326426 Luo et al. Nov 2015 A1
20150334025 Rader Nov 2015 A1
20150334696 Gu et al. Nov 2015 A1
20150341271 Gomez Nov 2015 A1
20150349978 Wu et al. Dec 2015 A1
20150350907 Timariu et al. Dec 2015 A1
20150358232 Chen et al. Dec 2015 A1
20150358236 Roach et al. Dec 2015 A1
20150363221 Terayama et al. Dec 2015 A1
20150363733 Brown Dec 2015 A1
20150365323 Duminuco et al. Dec 2015 A1
20150372943 Hasan et al. Dec 2015 A1
20150372982 Herle et al. Dec 2015 A1
20150381407 Wang et al. Dec 2015 A1
20150381493 Bansal et al. Dec 2015 A1
20160020844 Hart et al. Jan 2016 A1
20160021597 Hart et al. Jan 2016 A1
20160035183 Buchholz et al. Feb 2016 A1
20160036924 Koppolu et al. Feb 2016 A1
20160036938 Aviles et al. Feb 2016 A1
20160037434 Gopal et al. Feb 2016 A1
20160072669 Saavedra Mar 2016 A1
20160072684 Manuguri et al. Mar 2016 A1
20160080268 Anand et al. Mar 2016 A1
20160080502 Yadav et al. Mar 2016 A1
20160105353 Cociglio Apr 2016 A1
20160105392 Thakkar et al. Apr 2016 A1
20160105471 Nunes et al. Apr 2016 A1
20160105488 Thakkar et al. Apr 2016 A1
20160117185 Fang et al. Apr 2016 A1
20160134461 Sampath et al. May 2016 A1
20160134527 Kwak et al. May 2016 A1
20160134528 Lin et al. May 2016 A1
20160134591 Liao et al. May 2016 A1
20160142373 Ossipov May 2016 A1
20160150055 Choi May 2016 A1
20160164832 Bellagamba et al. Jun 2016 A1
20160164914 Madhav et al. Jun 2016 A1
20160173338 Wolting Jun 2016 A1
20160191363 Haraszti et al. Jun 2016 A1
20160191374 Singh et al. Jun 2016 A1
20160192403 Gupta et al. Jun 2016 A1
20160197834 Luft Jul 2016 A1
20160197835 Luft Jul 2016 A1
20160198003 Luft Jul 2016 A1
20160205071 Cooper et al. Jul 2016 A1
20160210209 Verkaik et al. Jul 2016 A1
20160212773 Kanderholm et al. Jul 2016 A1
20160218947 Hughes et al. Jul 2016 A1
20160218951 Vasseur et al. Jul 2016 A1
20160234099 Jiao Aug 2016 A1
20160255169 Kovvuri et al. Sep 2016 A1
20160255542 Hughes et al. Sep 2016 A1
20160261493 Li Sep 2016 A1
20160261495 Xia et al. Sep 2016 A1
20160261506 Hegde et al. Sep 2016 A1
20160261639 Xu Sep 2016 A1
20160269298 Li et al. Sep 2016 A1
20160269926 Sundaram Sep 2016 A1
20160285736 Gu Sep 2016 A1
20160299775 Madapurath et al. Oct 2016 A1
20160301471 Kunz et al. Oct 2016 A1
20160308762 Teng et al. Oct 2016 A1
20160315912 Mayya et al. Oct 2016 A1
20160323377 Einkauf et al. Nov 2016 A1
20160328159 Coddington et al. Nov 2016 A1
20160330111 Manghirmalani et al. Nov 2016 A1
20160337202 Ben-Itzhak et al. Nov 2016 A1
20160352588 Subbarayan et al. Dec 2016 A1
20160353268 Senarath et al. Dec 2016 A1
20160359738 Sullenberger et al. Dec 2016 A1
20160366187 Kamble Dec 2016 A1
20160371153 Dornemann Dec 2016 A1
20160378527 Zamir Dec 2016 A1
20160380886 Blair et al. Dec 2016 A1
20160380906 Hodique et al. Dec 2016 A1
20170005986 Bansal et al. Jan 2017 A1
20170006499 Hampel et al. Jan 2017 A1
20170012870 Blair et al. Jan 2017 A1
20170019428 Cohn Jan 2017 A1
20170026273 Yao et al. Jan 2017 A1
20170026283 Williams et al. Jan 2017 A1
20170026355 Mathaiyan et al. Jan 2017 A1
20170034046 Cai et al. Feb 2017 A1
20170034052 Chanda et al. Feb 2017 A1
20170034129 Sawant et al. Feb 2017 A1
20170048296 Ramalho et al. Feb 2017 A1
20170053258 Camey et al. Feb 2017 A1
20170055131 Kong et al. Feb 2017 A1
20170063674 Maskalik et al. Mar 2017 A1
20170063782 Jain et al. Mar 2017 A1
20170063783 Yong et al. Mar 2017 A1
20170063794 Jain et al. Mar 2017 A1
20170064005 Lee Mar 2017 A1
20170075710 Prasad et al. Mar 2017 A1
20170093625 Pera et al. Mar 2017 A1
20170097841 Chang et al. Apr 2017 A1
20170104653 Badea et al. Apr 2017 A1
20170104755 Arregoces et al. Apr 2017 A1
20170109212 Gaurav et al. Apr 2017 A1
20170118067 Vedula Apr 2017 A1
20170118173 Arramreddy et al. Apr 2017 A1
20170123939 Maheshwari et al. May 2017 A1
20170126475 Mahkonen et al. May 2017 A1
20170126516 Tiagi et al. May 2017 A1
20170126564 Mayya et al. May 2017 A1
20170134186 Mukundan et al. May 2017 A1
20170134520 Abbasi et al. May 2017 A1
20170139789 Fries et al. May 2017 A1
20170142000 Cai et al. May 2017 A1
20170149637 Banikazemi et al. May 2017 A1
20170155557 Desai et al. Jun 2017 A1
20170155590 Dillon et al. Jun 2017 A1
20170163473 Sadana et al. Jun 2017 A1
20170171310 Gardner Jun 2017 A1
20170180220 Leckey et al. Jun 2017 A1
20170181210 Nadella et al. Jun 2017 A1
20170195161 Ruel et al. Jul 2017 A1
20170195169 Mills et al. Jul 2017 A1
20170201585 Doraiswamy et al. Jul 2017 A1
20170207976 Rovner et al. Jul 2017 A1
20170214545 Cheng et al. Jul 2017 A1
20170214701 Hasan Jul 2017 A1
20170223117 Messerli et al. Aug 2017 A1
20170236060 Ignatyev Aug 2017 A1
20170237710 Mayya et al. Aug 2017 A1
20170242784 Heorhiadi et al. Aug 2017 A1
20170257260 Govindan et al. Sep 2017 A1
20170257309 Appanna Sep 2017 A1
20170264496 Ao et al. Sep 2017 A1
20170279717 Bethers et al. Sep 2017 A1
20170279741 Elias et al. Sep 2017 A1
20170279803 Desai et al. Sep 2017 A1
20170280474 Vesterinen et al. Sep 2017 A1
20170288987 Pasupathy et al. Oct 2017 A1
20170289002 Ganguli et al. Oct 2017 A1
20170289027 Ratnasingham Oct 2017 A1
20170295264 Touitou et al. Oct 2017 A1
20170302501 Shi et al. Oct 2017 A1
20170302565 Ghobadi et al. Oct 2017 A1
20170310641 Jiang et al. Oct 2017 A1
20170310691 Vasseur et al. Oct 2017 A1
20170317954 Masurekar et al. Nov 2017 A1
20170317969 Masurekar et al. Nov 2017 A1
20170317974 Masurekar et al. Nov 2017 A1
20170324628 Dhanabalan Nov 2017 A1
20170337086 Zhu et al. Nov 2017 A1
20170339022 Hegde et al. Nov 2017 A1
20170339054 Yadav et al. Nov 2017 A1
20170339070 Chang et al. Nov 2017 A1
20170346722 Smith et al. Nov 2017 A1
20170364419 Lo Dec 2017 A1
20170366445 Nemirovsky et al. Dec 2017 A1
20170366467 Martin et al. Dec 2017 A1
20170373950 Szilagyi et al. Dec 2017 A1
20170374174 Evens et al. Dec 2017 A1
20180006995 Bickhart et al. Jan 2018 A1
20180007005 Chanda et al. Jan 2018 A1
20180007123 Cheng et al. Jan 2018 A1
20180013636 Seetharamaiah et al. Jan 2018 A1
20180014051 Phillips et al. Jan 2018 A1
20180020035 Boggia et al. Jan 2018 A1
20180034668 Mayya et al. Feb 2018 A1
20180041425 Zhang Feb 2018 A1
20180062875 Tumuluru Mar 2018 A1
20180062914 Boutros et al. Mar 2018 A1
20180062917 Chandrashekhar et al. Mar 2018 A1
20180063036 Chandrashekhar et al. Mar 2018 A1
20180063193 Chandrashekhar et al. Mar 2018 A1
20180063233 Park Mar 2018 A1
20180063743 Tumuluru et al. Mar 2018 A1
20180069924 Tumuluru et al. Mar 2018 A1
20180074909 Bishop et al. Mar 2018 A1
20180077081 Lauer et al. Mar 2018 A1
20180077202 Xu Mar 2018 A1
20180084081 Kuchibhotla et al. Mar 2018 A1
20180097725 Wood et al. Apr 2018 A1
20180114569 Strachan et al. Apr 2018 A1
20180123910 Fitzgibbon May 2018 A1
20180123946 Ramachandran et al. May 2018 A1
20180131608 Jiang et al. May 2018 A1
20180131615 Zhang May 2018 A1
20180131720 Hobson et al. May 2018 A1
20180145899 Rao May 2018 A1
20180159796 Wang et al. Jun 2018 A1
20180159856 Gujarathi Jun 2018 A1
20180167378 Kostyukov et al. Jun 2018 A1
20180176073 Dubey et al. Jun 2018 A1
20180176082 Katz et al. Jun 2018 A1
20180176130 Banerjee et al. Jun 2018 A1
20180176252 Nimmagadda et al. Jun 2018 A1
20180181423 Gunda et al. Jun 2018 A1
20180205746 Boutnaru et al. Jul 2018 A1
20180213472 Ishii et al. Jul 2018 A1
20180219765 Michael et al. Aug 2018 A1
20180219766 Michael et al. Aug 2018 A1
20180234300 Mayya et al. Aug 2018 A1
20180248790 Tan et al. Aug 2018 A1
20180260125 Botes et al. Sep 2018 A1
20180261085 Liu et al. Sep 2018 A1
20180262468 Kumar et al. Sep 2018 A1
20180270104 Zheng et al. Sep 2018 A1
20180278541 Wu et al. Sep 2018 A1
20180287907 Kulshreshtha et al. Oct 2018 A1
20180295101 Gehrmann Oct 2018 A1
20180295529 Jen et al. Oct 2018 A1
20180302286 Mayya et al. Oct 2018 A1
20180302321 Manthiramoorthy et al. Oct 2018 A1
20180307851 Lewis Oct 2018 A1
20180316606 Sung et al. Nov 2018 A1
20180351855 Sood et al. Dec 2018 A1
20180351862 Jeganathan et al. Dec 2018 A1
20180351863 Vairavakkalai et al. Dec 2018 A1
20180351882 Jeganathan et al. Dec 2018 A1
20180367445 Bajaj Dec 2018 A1
20180373558 Chang et al. Dec 2018 A1
20180375744 Mayya et al. Dec 2018 A1
20180375824 Mayya et al. Dec 2018 A1
20180375967 Pithawala et al. Dec 2018 A1
20190013883 Vargas et al. Jan 2019 A1
20190014038 Ritchie Jan 2019 A1
20190020588 Twitchell, Jr. Jan 2019 A1
20190020627 Yuan Jan 2019 A1
20190028378 Houjyo et al. Jan 2019 A1
20190028552 Johnson et al. Jan 2019 A1
20190036808 Shenoy et al. Jan 2019 A1
20190036810 Michael et al. Jan 2019 A1
20190036813 Shenoy et al. Jan 2019 A1
20190046056 Khachaturian et al. Feb 2019 A1
20190058657 Chunduri et al. Feb 2019 A1
20190058709 Kempf et al. Feb 2019 A1
20190068470 Mirsky Feb 2019 A1
20190068493 Ram et al. Feb 2019 A1
20190068500 Hira Feb 2019 A1
20190075083 Mayya et al. Mar 2019 A1
20190103990 Cidon et al. Apr 2019 A1
20190103991 Cidon et al. Apr 2019 A1
20190103992 Cidon et al. Apr 2019 A1
20190103993 Cidon et al. Apr 2019 A1
20190104035 Cidon et al. Apr 2019 A1
20190104049 Cidon et al. Apr 2019 A1
20190104050 Cidon et al. Apr 2019 A1
20190104051 Cidon et al. Apr 2019 A1
20190104052 Cidon et al. Apr 2019 A1
20190104053 Cidon et al. Apr 2019 A1
20190104063 Cidon et al. Apr 2019 A1
20190104064 Cidon et al. Apr 2019 A1
20190104109 Cidon et al. Apr 2019 A1
20190104111 Cidon et al. Apr 2019 A1
20190104413 Cidon et al. Apr 2019 A1
20190109769 Jain et al. Apr 2019 A1
20190132221 Boutros et al. May 2019 A1
20190132234 Dong et al. May 2019 A1
20190132322 Song et al. May 2019 A1
20190140889 Mayya et al. May 2019 A1
20190140890 Mayya et al. May 2019 A1
20190149525 Gunda et al. May 2019 A1
20190158371 Dillon et al. May 2019 A1
20190158605 Markuze et al. May 2019 A1
20190199539 Deng et al. Jun 2019 A1
20190220703 Prakash et al. Jul 2019 A1
20190238364 Boutros et al. Aug 2019 A1
20190238446 Barzik et al. Aug 2019 A1
20190238449 Michael et al. Aug 2019 A1
20190238450 Michael et al. Aug 2019 A1
20190238483 Marichetty et al. Aug 2019 A1
20190268421 Markuze et al. Aug 2019 A1
20190268973 Bull et al. Aug 2019 A1
20190278631 Bernat et al. Sep 2019 A1
20190280962 Michael et al. Sep 2019 A1
20190280963 Michael et al. Sep 2019 A1
20190280964 Michael et al. Sep 2019 A1
20190288875 Shen et al. Sep 2019 A1
20190306197 Degioanni Oct 2019 A1
20190306282 Masputra et al. Oct 2019 A1
20190313278 Liu Oct 2019 A1
20190313907 Khachaturian et al. Oct 2019 A1
20190319847 Nahar et al. Oct 2019 A1
20190319881 Maskara et al. Oct 2019 A1
20190327109 Guichard et al. Oct 2019 A1
20190334786 Dutta Oct 2019 A1
20190334813 Raj et al. Oct 2019 A1
20190334820 Zhao Oct 2019 A1
20190342201 Singh Nov 2019 A1
20190342219 Liu et al. Nov 2019 A1
20190356736 Narayanaswamy et al. Nov 2019 A1
20190364099 Thakkar et al. Nov 2019 A1
20190364456 Yu Nov 2019 A1
20190372888 Michael Dec 2019 A1
20190372889 Michael et al. Dec 2019 A1
20190372890 Michael et al. Dec 2019 A1
20190394081 Tahhan et al. Dec 2019 A1
20200014609 Hockett et al. Jan 2020 A1
20200014615 Michael Jan 2020 A1
20200014616 Michael et al. Jan 2020 A1
20200014661 Mayya et al. Jan 2020 A1
20200014663 Chen et al. Jan 2020 A1
20200021514 Michael Jan 2020 A1
20200021515 Michael Jan 2020 A1
20200036624 Michael et al. Jan 2020 A1
20200044943 Bor-Yaliniz et al. Feb 2020 A1
20200044969 Hao et al. Feb 2020 A1
20200059420 Abraham Feb 2020 A1
20200059457 Raza et al. Feb 2020 A1
20200059459 Abraham et al. Feb 2020 A1
20200067831 Spraggins et al. Feb 2020 A1
20200092207 Sipra et al. Mar 2020 A1
20200097327 Beyer et al. Mar 2020 A1
20200099625 Yigit et al. Mar 2020 A1
20200099659 Cometto et al. Mar 2020 A1
20200106696 Michael et al. Apr 2020 A1
20200106706 Mayya et al. Apr 2020 A1
20200119952 Mayya et al. Apr 2020 A1
20200127905 Mayya et al. Apr 2020 A1
20200127911 Gilson et al. Apr 2020 A1
20200153701 Mohan May 2020 A1
20200153736 Liebherr et al. May 2020 A1
20200159661 Keymolen et al. May 2020 A1
20200162407 Tillotson May 2020 A1
20200169473 Rimar et al. May 2020 A1
20200177503 Hooda et al. Jun 2020 A1
20200177550 Valluri et al. Jun 2020 A1
20200177629 Hooda Jun 2020 A1
20200186471 Shen et al. Jun 2020 A1
20200195557 Duan Jun 2020 A1
20200204460 Schneider et al. Jun 2020 A1
20200213212 Dillon et al. Jul 2020 A1
20200213224 Cheng et al. Jul 2020 A1
20200218558 Sreenath et al. Jul 2020 A1
20200235990 Janakiraman et al. Jul 2020 A1
20200235999 Mayya et al. Jul 2020 A1
20200236046 Jain et al. Jul 2020 A1
20200241927 Yang et al. Jul 2020 A1
20200244721 S et al. Jul 2020 A1
20200252234 Ramamoorthi et al. Aug 2020 A1
20200259700 Bhalla et al. Aug 2020 A1
20200267184 Vera-Schockner Aug 2020 A1
20200267203 Jindal et al. Aug 2020 A1
20200280587 Janakiraman et al. Sep 2020 A1
20200287819 Theogaraj Sep 2020 A1
20200287976 Theogaraj et al. Sep 2020 A1
20200296011 Jain et al. Sep 2020 A1
20200296026 Michael et al. Sep 2020 A1
20200301764 Thoresen et al. Sep 2020 A1
20200314006 Mackie et al. Oct 2020 A1
20200314614 Moustafa et al. Oct 2020 A1
20200322230 Natal et al. Oct 2020 A1
20200322287 Connor et al. Oct 2020 A1
20200336336 Sethi et al. Oct 2020 A1
20200344089 Motwani et al. Oct 2020 A1
20200344143 Faseela et al. Oct 2020 A1
20200344163 Gupta et al. Oct 2020 A1
20200351188 Arora et al. Nov 2020 A1
20200358878 Bansal et al. Nov 2020 A1
20200366530 Mukundan et al. Nov 2020 A1
20200366562 Mayya et al. Nov 2020 A1
20200382345 Zhao et al. Dec 2020 A1
20200382387 Pasupathy et al. Dec 2020 A1
20200403821 Dev Dec 2020 A1
20200412483 Tan Dec 2020 A1
20200412576 Kondapavuluru et al. Dec 2020 A1
20200413283 Shen et al. Dec 2020 A1
20210006482 Hwang et al. Jan 2021 A1
20210006490 Michael et al. Jan 2021 A1
20210029019 Kottapalli Jan 2021 A1
20210029088 Mayya et al. Jan 2021 A1
20210036888 Makkalla et al. Feb 2021 A1
20210036987 Mishra et al. Feb 2021 A1
20210067372 Cidon et al. Mar 2021 A1
20210067373 Cidon et al. Mar 2021 A1
20210067374 Cidon et al. Mar 2021 A1
20210067375 Cidon et al. Mar 2021 A1
20210067407 Cidon et al. Mar 2021 A1
20210067427 Cidon et al. Mar 2021 A1
20210067442 Sundararajan et al. Mar 2021 A1
20210067461 Cidon et al. Mar 2021 A1
20210067464 Cidon et al. Mar 2021 A1
20210067467 Cidon et al. Mar 2021 A1
20210067468 Cidon et al. Mar 2021 A1
20210073001 Rogers et al. Mar 2021 A1
20210092062 Dhanabalan et al. Mar 2021 A1
20210099360 Parsons Apr 2021 A1
20210105199 H et al. Apr 2021 A1
20210111998 Saavedra Apr 2021 A1
20210112034 Sundararajan et al. Apr 2021 A1
20210126830 R. et al. Apr 2021 A1
20210126853 Ramaswamy et al. Apr 2021 A1
20210126854 Guo et al. Apr 2021 A1
20210126860 Ramaswamy et al. Apr 2021 A1
20210144091 H et al. May 2021 A1
20210160169 Shen et al. May 2021 A1
20210160813 Gupta et al. May 2021 A1
20210176255 Hill et al. Jun 2021 A1
20210184952 Mayya Jun 2021 A1
20210184966 Ramaswamy et al. Jun 2021 A1
20210184983 Ramaswamy et al. Jun 2021 A1
20210194814 Roux et al. Jun 2021 A1
20210226880 Ramamoorthy et al. Jul 2021 A1
20210234728 Cidon et al. Jul 2021 A1
20210234775 Devadoss et al. Jul 2021 A1
20210234786 Devadoss et al. Jul 2021 A1
20210234804 Devadoss et al. Jul 2021 A1
20210234805 Devadoss et al. Jul 2021 A1
20210235312 Devadoss et al. Jul 2021 A1
20210235313 Devadoss et al. Jul 2021 A1
20210266262 Subramanian et al. Aug 2021 A1
20210279069 Salgaonkar et al. Sep 2021 A1
20210314289 Chandrashekhar et al. Oct 2021 A1
20210314385 Pande Oct 2021 A1
20210328835 Mayya et al. Oct 2021 A1
20210336880 Gupta et al. Oct 2021 A1
20210377109 Shrivastava et al. Dec 2021 A1
20210377156 Michael et al. Dec 2021 A1
20210392060 Silva et al. Dec 2021 A1
20210392070 Zad Tootaghaj Dec 2021 A1
20210399920 Sundararajan et al. Dec 2021 A1
20210399978 Michael et al. Dec 2021 A9
20210400113 Markuze et al. Dec 2021 A1
20210400512 Agarwal et al. Dec 2021 A1
20210409277 Jeuk et al. Dec 2021 A1
20220006726 Michael et al. Jan 2022 A1
20220006751 Ramaswamy et al. Jan 2022 A1
20220006756 Ramaswamy et al. Jan 2022 A1
20220029902 Shemer et al. Jan 2022 A1
20220035673 Markuze et al. Feb 2022 A1
20220038370 Vasseur et al. Feb 2022 A1
20220038557 Markuze et al. Feb 2022 A1
20220045927 Liu et al. Feb 2022 A1
20220052928 Sundararajan et al. Feb 2022 A1
20220061059 Dunsmore et al. Feb 2022 A1
20220086035 Devaraj et al. Mar 2022 A1
20220094644 Cidon et al. Mar 2022 A1
20220123961 Mukundan et al. Apr 2022 A1
20220131740 Mayya et al. Apr 2022 A1
20220131807 Srinivas et al. Apr 2022 A1
20220141184 Oswal et al. May 2022 A1
20220158923 Ramaswamy et al. May 2022 A1
20220158924 Ramaswamy et al. May 2022 A1
20220158926 Wennerstrom et al. May 2022 A1
20220166713 Markuze et al. May 2022 A1
20220191719 Roy Jun 2022 A1
20220198229 López et al. Jun 2022 A1
20220210035 Hendrickson et al. Jun 2022 A1
20220210041 Gandhi et al. Jun 2022 A1
20220210042 Gandhi et al. Jun 2022 A1
20220210122 Levin et al. Jun 2022 A1
20220217015 Vuggrala et al. Jul 2022 A1
20220231949 Ramaswamy et al. Jul 2022 A1
20220232411 Vijayakumar et al. Jul 2022 A1
20220239596 Kumar et al. Jul 2022 A1
20220294701 Mayya et al. Sep 2022 A1
20220335027 Seshadri et al. Oct 2022 A1
20220337553 Mayya et al. Oct 2022 A1
20220353152 Ramaswamy Nov 2022 A1
20220353171 Ramaswamy et al. Nov 2022 A1
20220353175 Ramaswamy et al. Nov 2022 A1
20220353182 Ramaswamy et al. Nov 2022 A1
20220353190 Ramaswamy et al. Nov 2022 A1
20220360500 Ramaswamy et al. Nov 2022 A1
20220407773 Kempanna et al. Dec 2022 A1
20220407774 Kempanna et al. Dec 2022 A1
20220407790 Kempanna et al. Dec 2022 A1
20220407820 Kempanna et al. Dec 2022 A1
20220407915 Kempanna et al. Dec 2022 A1
20230006929 Mayya et al. Jan 2023 A1
20230025586 Rolando et al. Jan 2023 A1
20230026330 Rolando et al. Jan 2023 A1
20230026865 Rolando et al. Jan 2023 A1
20230028872 Ramaswamy Jan 2023 A1
20230039869 Ramaswamy et al. Feb 2023 A1
20230041916 Zhang et al. Feb 2023 A1
20230054961 Ramaswamy et al. Feb 2023 A1
Foreign Referenced Citations (49)
Number Date Country
1926809 Mar 2007 CN
102577270 Jul 2012 CN
102811165 Dec 2012 CN
104956329 Sep 2015 CN
106230650 Dec 2016 CN
106656847 May 2017 CN
110447209 Nov 2019 CN
111198764 May 2020 CN
1912381 Apr 2008 EP
2538637 Dec 2012 EP
2763362 Aug 2014 EP
3041178 Jul 2016 EP
3297211 Mar 2018 EP
3509256 Jul 2019 EP
3346650 Nov 2019 EP
2002368792 Dec 2002 JP
2010233126 Oct 2010 JP
2014200010 Oct 2014 JP
2017059991 Mar 2017 JP
2017524290 Aug 2017 JP
20170058201 May 2017 KR
2574350 Feb 2016 RU
03073701 Sep 2003 WO
2005071861 Aug 2005 WO
2007016834 Feb 2007 WO
2012167184 Dec 2012 WO
2015092565 Jun 2015 WO
2016061546 Apr 2016 WO
2016123314 Aug 2016 WO
2017083975 May 2017 WO
2019070611 Apr 2019 WO
2019094522 May 2019 WO
2020012491 Jan 2020 WO
2020018704 Jan 2020 WO
2020091777 May 2020 WO
2020101922 May 2020 WO
2020112345 Jun 2020 WO
2021040934 Mar 2021 WO
2021118717 Jun 2021 WO
2021150465 Jul 2021 WO
2021211906 Oct 2021 WO
2022005607 Jan 2022 WO
2022082680 Apr 2022 WO
2022154850 Jul 2022 WO
2022159156 Jul 2022 WO
2022231668 Nov 2022 WO
2022235303 Nov 2022 WO
2022265681 Dec 2022 WO
2023009159 Feb 2023 WO
Non-Patent Literature Citations (64)
Non-Published Commonly Owned U.S. Appl. No. 17/827,972, filed May 30, 2022, 30 pages, Nicira, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/850,112, filed Jun. 27, 2022, 41 pages, Nicira, Inc.
Alsaeedi, Mohammed, et al., “Toward Adaptive and Scalable OpenFlow-SDN Flow Control: A Survey,” IEEE Access, Aug. 1, 2019, 34 pages, vol. 7, IEEE, retrieved from https://ieeexplore.ieee.org/document/8784036.
Del Piccolo, Valentin, et al., “A Survey of Network Isolation Solutions for Multi-Tenant Data Centers,” IEEE Communications Society, Apr. 20, 2016, vol. 18, No. 4, 37 pages, IEEE.
Fortz, Bernard, et al., “Internet Traffic Engineering by Optimizing OSPF Weights,” Proceedings IEEE INFOCOM 2000, Conference on Computer Communications, Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies, Mar. 26-30, 2000, 11 pages, IEEE, Tel Aviv, Israel.
Francois, Frederic, et al., “Optimizing Secure SDN-enabled Inter-Data Centre Overlay Networks through Cognitive Routing,” 2016 IEEE 24th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), Sep. 19-21, 2016, 10 pages, IEEE, London, UK.
Guo, Xiangyi, et al., (U.S. Appl. No. 62/925,193), filed Oct. 23, 2019, 26 pages.
Huang, Cancan, et al., “Modification of Q.SD-WAN,” Rapporteur Group Meeting—Doc, Study Period 2017-2020, Q4/11-DOC1 (190410), Study Group 11, Apr. 10, 2019, 19 pages, International Telecommunication Union, Geneva, Switzerland.
Lasserre, Marc, et al., “Framework for Data Center (DC) Network Virtualization,” RFC 7365, Oct. 2014, 26 pages, IETF.
Lin, Weidong, et al., “Using Path Label Routing in Wide Area Software-Defined Networks with OpenFlow,” 2016 International Conference on Networking and Network Applications, Jul. 2016, 6 pages, IEEE.
Long, Feng, “Research and Application of Cloud Storage Technology in University Information Service,” Chinese Excellent Masters' Theses Full-text Database, Mar. 2013, 72 pages, China Academic Journals Electronic Publishing House, China.
Michael, Nithin, et al., “HALO: Hop-by-Hop Adaptive Link-State Optimal Routing,” IEEE/ACM Transactions on Networking, Dec. 2015, 14 pages, vol. 23, No. 6, IEEE.
Mishra, Mayank, et al., “Managing Network Reservation for Tenants in Oversubscribed Clouds,” 2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems, Aug. 14-16, 2013, 10 pages, IEEE, San Francisco, CA, USA.
Mudigonda, Jayaram, et al., “NetLord: A Scalable Multi-Tenant Network Architecture for Virtualized Datacenters,” Proceedings of the ACM SIGCOMM 2011 Conference, Aug. 15-19, 2011, 12 pages, ACM, Toronto, Canada.
Non-Published Commonly Owned Related International Patent Application PCT/US2021/057794 with similar specification, filed Nov. 2, 2021, 49 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/103,614, filed Nov. 24, 2020, 38 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/143,092, filed Jan. 6, 2021, 42 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/143,094, filed Jan. 6, 2021, 42 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/194,038, filed Mar. 5, 2021, 35 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/227,016, filed Apr. 9, 2021, 37 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/227,044, filed Apr. 9, 2021, 37 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/351,327, filed Jun. 18, 2021, 48 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/351,333, filed Jun. 18, 2021, 47 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/351,340, filed Jun. 18, 2021, 48 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/351,342, filed Jun. 18, 2021, 47 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/351,345, filed Jun. 18, 2021, 48 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/384,735, filed Jul. 24, 2021, 62 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/384,736, filed Jul. 24, 2021, 63 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/384,737, filed Jul. 24, 2021, 63 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/384,738, filed Jul. 24, 2021, 62 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/510,862, filed Oct. 26, 2021, 46 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/517,639, filed Nov. 2, 2021, 46 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/542,413, filed Dec. 4, 2021, 173 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/562,890, filed Dec. 27, 2021, 36 pages, Nicira, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/572,583, filed Jan. 10, 2022, 33 pages, Nicira, Inc.
Non-Published Commonly Owned U.S. Appl. No. 15/803,964, filed Nov. 6, 2017, 15 pages, The Mode Group.
Noormohammadpour, Mohammad, et al., “DCRoute: Speeding up Inter-Datacenter Traffic Allocation while Guaranteeing Deadlines,” 2016 IEEE 23rd International Conference on High Performance Computing (HiPC), Dec. 19-22, 2016, 9 pages, IEEE, Hyderabad, India.
Ray, Saikat, et al., “Always Acyclic Distributed Path Computation,” University of Pennsylvania Department of Electrical and Systems Engineering Technical Report, May 2008, 16 pages, University of Pennsylvania ScholarlyCommons.
Sarhan, Soliman Abd Elmonsef, et al., “Data Inspection in SDN Network,” 2018 13th International Conference on Computer Engineering and Systems (ICCES), Dec. 18-19, 2018, 6 pages, IEEE, Cairo, Egypt.
Webb, Kevin C., et al., “Blender: Upgrading Tenant-Based Data Center Networking,” 2014 ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS), Oct. 20-21, 2014, 11 pages, IEEE, Marina del Rey, CA, USA.
Xie, Junfeng, et al., “A Survey of Machine Learning Techniques Applied to Software Defined Networking (SDN): Research Issues and Challenges,” IEEE Communications Surveys & Tutorials, Aug. 23, 2018, 38 pages, vol. 21, Issue 1, IEEE.
Yap, Kok-Kiong, et al., “Taking the Edge off with Espresso: Scale, Reliability and Programmability for Global Internet Peering,” SIGCOMM '17: Proceedings of the Conference of the ACM Special Interest Group on Data Communication, Aug. 21-25, 2017, 14 pages, Los Angeles, CA.
Alvizu, Rodolfo, et al., “SDN-Based Network Orchestration for New Dynamic Enterprise Networking Services,” 2017 19th International Conference on Transparent Optical Networks, Jul. 2-6, 2017, 4 pages, IEEE, Girona, Spain.
Barozet, Jean-Marc, “Cisco SD-WAN as a Managed Service,” BRKRST-2558, Jan. 27-31, 2020, 98 pages, Cisco, Barcelona, Spain, retrieved from https://www.ciscolive.com/c/dam/r/ciscolive/emea/docs/2020/pdf/BRKRST-2558.pdf.
Barozet, Jean-Marc, "Cisco SDWAN," Deep Dive, Dec. 2017, 185 pages, Cisco, retrieved from https://www.coursehero.com/file/71671376/Cisco-SDWAN-Deep-Divepdf/.
Bertaux, Lionel, et al., “Software Defined Networking and Virtualization for Broadband Satellite Networks,” IEEE Communications Magazine, Mar. 18, 2015, 7 pages, vol. 53, IEEE, retrieved from https://ieeexplore.ieee.org/document/7060482.
Cox, Jacob H., et al., “Advancing Software-Defined Networks: A Survey,” IEEE Access, Oct. 12, 2017, 40 pages, vol. 5, IEEE, retrieved from https://ieeexplore.ieee.org/document/8066287.
Duan, Zhenhai, et al., “Service Overlay Networks: SLAs, QoS, and Bandwidth Provisioning,” IEEE/ACM Transactions on Networking, Dec. 2003, 14 pages, vol. 11, IEEE, New York, NY, USA.
Jivorasetkul, Supalerk, et al., “End-to-End Header Compression over Software-Defined Networks: a Low Latency Network Architecture,” 2012 Fourth International Conference on Intelligent Networking and Collaborative Systems, Sep. 19-21, 2012, 2 pages, IEEE, Bucharest, Romania.
Li, Shengru, et al., “Source Routing with Protocol-oblivious Forwarding (POF) to Enable Efficient e-Health Data Transfers,” 2016 IEEE International Conference on Communications (ICC), May 22-27, 2016, 6 pages, IEEE, Kuala Lumpur, Malaysia.
Ming, Gao, et al., “A Design of SD-WAN-Oriented Wide Area Network Access,” 2020 International Conference on Computer Communication and Network Security (CCNS), Aug. 21-23, 2020, 4 pages, IEEE, Xi'an, China.
PCT International Search Report and Written Opinion of commonly owned International Patent Application PCT/US2021/057794, dated Feb. 22, 2022, 14 pages, International Searching Authority (EPO).
Tootaghaj, Diman Zad, et al., "Homa: An Efficient Topology and Route Management Approach in SD-WAN Overlays," IEEE INFOCOM 2020—IEEE Conference on Computer Communications, Jul. 6-9, 2020, 10 pages, IEEE, Toronto, ON, Canada.
Non-Published Commonly Owned U.S. Appl. No. 17/967,795, filed Oct. 17, 2022, 39 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 17/976,784, filed Oct. 29, 2022, 55 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 18/083,536, filed Dec. 18, 2022, 27 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 18/102,685, filed Jan. 28, 2023, 124 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 18/102,687, filed Jan. 28, 2023, 172 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 18/102,688, filed Jan. 28, 2023, 49 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 18/102,689, filed Jan. 28, 2023, 46 pages, VMware, Inc.
Taleb, Tarik, “D4.1 Mobile Network Cloud Component Design,” Mobile Cloud Networking, Nov. 8, 2013, 210 pages, MobileCloud Networking Consortium, retrieved from http://www.mobile-cloud-networking.eu/site/index.php?process=download&id=127&code=89d30565cd2ce087d3f8e95f9ad683066510a61f.
Valtulina, Luca, “Seamless Distributed Mobility Management (DMM) Solution in Cloud Based LTE Systems,” Master Thesis, Nov. 2013, 168 pages, University of Twente, retrieved from http://essay.utwente.nl/64411/1/Luca_Valtulina_MSc_Report_final.pdf.
Kurdaev, Gieorgi, et al., "Dynamic On-Demand Virtual Extensible LAN Tunnels via Software-Defined Wide Area Networks," 2022 IEEE 12th Annual Computing and Communication Workshop and Conference, Jan. 26-29, 2022, 6 pages, IEEE, Las Vegas, NV, USA.
Author Unknown, “VeloCloud Administration Guide: VMware SD-WAN by VeloCloud 3.3,” Month Unknown 2019, 366 pages, VMware, Inc., Palo Alto, CA, USA.
Related Publications (1)

U.S. Publication No. 20220231950 A1, published Jul. 2022, United States.