Network-link method useful for a last-mile connectivity in an edge-gateway multipath system

Information

  • Patent Grant
  • 11212140
  • Patent Number
    11,212,140
  • Date Filed
    Friday, July 31, 2020
  • Date Issued
    Tuesday, December 28, 2021
Abstract
In one exemplary aspect, an edge-gateway multipath method includes the step of providing an edge device in a local network communicatively coupled with a cloud-computing service in a cloud-computing network. A set of wide area network (WAN) links connected to the edge device are automatically detected. The WAN links are automatically measured without the need for an external router. The edge device is communicatively coupled with a central configuration point in the cloud-computing network. The method further includes the step of downloading, from the central configuration point, enterprise-specific configuration data into the edge device. The enterprise-specific configuration data includes gateway information. The edge device is communicatively coupled with a gateway in the cloud-computing network. The communicative coupling of the edge device with the gateway includes a multipath (MP) protocol.
Description
BACKGROUND

Several trends are altering the use of enterprise applications. For example, enterprises are moving to hosting applications in private and public clouds as opposed to enterprise data centers. Enterprises are also increasingly using applications provided by other companies, which are generically grouped under SaaS (Software-as-a-Service) and are not hosted in an enterprise-data center. In another example, enterprises are migrating from large Information Technology (IT) supported branches to smaller branches. These smaller branches can utilize remote IT management strategies.


These trends have combined to alter applications' network paths and/or the quality of service (QoS) of these paths. With enterprise data-center applications, the large IT branches can lease multiprotocol label switching (MPLS) lines. MPLS can be a mechanism in communications networks that directs data from one network node to the next node based on short path labels rather than long network addresses, thus avoiding complex lookups in a routing table. MPLS lines can be associated with a known level of QoS that provides a deterministic application access experience and/or application availability. Applications are moving to the cloud where they are deployed either in the public and/or hybrid cloud. Enterprise branches access these applications via the public Internet. Access to these applications in such cases may be hampered by the ‘best effort’ nature of access as opposed to having a known QoS level. Additionally, a smaller branch may also utilize computing devices that are relatively easy to deploy and/or remotely manage in the event no on-site IT staff is available.


BRIEF SUMMARY OF THE INVENTION

In one aspect, a network-link method useful for a last-mile connectivity in an edge-gateway multipath includes the step of identifying a network-traffic flow of a computer network using deep-packet inspection to determine an identity of an application type associated with the network-traffic flow. The network-link method includes the step of aggregating a bandwidth from a specified set of network links. The network-link method includes the step of intelligently load-balancing a traffic on the set of network links by sending successive packets belonging to a same traffic flow on a set of specified multiple-network links. The set of specified multiple-network links is selected based on the identity of an application type associated with the network-traffic flow. The network-link method includes the step of identifying a set of active-network links in the set of specified multiple-network links. The network-link method includes the step of providing an in-order data delivery with an application persistence by sending data packets belonging to a same data-packet flow on the set of active links. The network-link method includes the step of correcting an error on a lossy network link using an error-control mechanism for data transmission selectively based on the identified network-traffic flow and a current measured condition in the computer network.





BRIEF DESCRIPTION OF THE DRAWINGS

The present application can be best understood by reference to the following description taken in conjunction with the accompanying figures, in which like parts may be referred to by like numerals.



FIG. 1 illustrates an example of a programmable, multi-tenant overlay network, according to some embodiments.



FIG. 2 depicts a process of a network link used to replace ‘last mile’ connectivity, according to some embodiments.



FIG. 3 depicts a process of removing the requirement for an IT administrator to configure each individual device in an enterprise computing network, according to some embodiments.



FIG. 4 illustrates an example flow sequence diagram for an MP packet flow, according to some embodiments.



FIG. 5 illustrates an example MP process for bandwidth aggregation and data ordering, according to some embodiments.



FIG. 6 illustrates an example of metadata in an MP header, according to some embodiments.



FIG. 7 depicts an exemplary computing system that can be configured to perform any one of the processes provided herein.





The Figures described above are a representative set, and are not exhaustive with respect to embodying the invention.


DETAILED DESCRIPTION

Disclosed are a network-link method and system useful for a last-mile connectivity in an edge-gateway multipath. Although the present embodiments have been described with reference to specific example embodiments, it can be evident that various modifications and changes may be made to these embodiments without departing from the broader spirit and scope of the particular example embodiment.


Reference throughout this specification to “one embodiment,” “an embodiment,” or similar language means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the present invention. Thus, appearances of the phrases “in one embodiment,” “in an embodiment,” and similar language throughout this specification may, but do not necessarily, all refer to the same embodiment.


Furthermore, the described features, structures, or characteristics of the invention may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided, such as examples of programming, software modules, attendee selections, network transactions, database queries, database structures, hardware modules, hardware circuits, hardware chips, etc., to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art can recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, materials, and so forth. In other instances, well-known structures, materials, or operations are not shown or described in detail to avoid obscuring aspects of the invention.


The schematic flow chart diagrams included herein are generally set forth as logical flow chart diagrams. As such, the depicted order and labelled steps are indicative of one embodiment of the presented method. Other steps and methods may be conceived that are equivalent in function, logic, or effect to one or more steps, or portions thereof, of the illustrated method. Additionally, the format and symbols employed are provided to explain the logical steps of the method and are understood not to limit the scope of the method. Although various arrow types and line types may be employed in the flow chart diagrams, they are understood not to limit the scope of the corresponding method. Indeed, some arrows or other connectors may be used to indicate only the logical flow of the method. For instance, an arrow may indicate a waiting or monitoring period of unspecified duration between enumerated steps of the depicted method. Additionally, the order in which a particular method occurs may or may not strictly adhere to the order of the corresponding steps shown.


Example Definitions

Automatic Repeat reQuest (ARQ) can be an error-control method for data transmission that uses acknowledgements (e.g. messages sent by the receiver indicating that it has correctly received a data frame or packet) and timeouts (e.g. specified periods of time allowed to elapse before an acknowledgment is to be received) to achieve reliable data transmission over an unreliable service. If the sender does not receive an acknowledgment before the timeout, it retransmits the frame/packet until the sender receives an acknowledgment or exceeds a predefined number of re-transmissions.
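
For illustration only, the following is a minimal stop-and-wait ARQ sender sketch in Python. The frame format, the send and recv_ack callbacks, and the timeout and retry values are assumptions introduced for this example, not elements of the disclosed system:

import time

def arq_send(frame, send, recv_ack, timeout=0.5, max_retries=5):
    """Transmit one frame, retransmitting until an acknowledgment arrives or
    the predefined number of re-transmissions is exceeded (stop-and-wait ARQ)."""
    for attempt in range(max_retries):
        send(frame)                          # transmit the frame/packet
        deadline = time.time() + timeout
        while time.time() < deadline:
            if recv_ack():                   # receiver confirmed correct delivery
                return True
            time.sleep(0.01)                 # poll until the timeout elapses
    return False                             # retry budget exhausted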


Deep-packet inspection (DPI) can include a form of computer network packet filtering that examines the data part (and also the header in some embodiments) of a packet as it passes an inspection point.


Forward error correction (FEC) can be used for controlling errors in data transmission over unreliable or noisy communication channels. A sender can encode a message in a redundant way by using an error-correcting code (ECC). FEC codes can include block codes, convolutional codes, etc.


Last-mile connectivity can refer to the final leg of the telecommunications networks delivery components and mechanisms.


Lossy can refer to data compression in which unnecessary information is discarded.


Multiprotocol Label Switching (MPLS) can be a type of data-carrying technique for high-performance telecommunications networks that directs data from one network node to the next based on short path labels rather than long network addresses, avoiding complex lookups in a routing table. The labels can identify virtual links between distant nodes rather than endpoints. MPLS can encapsulate packets of various network protocols. MPLS can support a range of access technologies, including, inter alia: T-carrier (e.g. T1)/E-carrier (E1), Asynchronous Transfer Mode (ATM), Frame Relay, and Digital subscriber line (DSL).


Quality of service (QoS) can refer to the overall performance of a telephony or computer network, particularly the performance seen by the users of the network.


Software as a Service (SaaS) can be a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted.


Virtual Machine (VM) can be an emulation of a particular computer system.


Link Characterization can refer to measuring the quality of a link which will include the latency (e.g. one-way packet delay), jitter (e.g. packet delay variation), loss (e.g. what percentage of packets are actually delivered at the receiving end) and available bandwidth.


Measured Condition in a computer network can refer to the characterization of one or more links that are connected to an edge device.


Error Control Mechanism can refer to the remedial action taken by an edge device or a gateway device to overcome the side effects of a non-perfect link. These mechanisms can be used to overcome jitter and loss experienced in any one link and include forward error correction (FEC) and duplication of packets (e.g. if multiple links are available). This can also include an implementation of a jitter buffer which can minimize the effects of the packet delay variation.


Example Methods and Systems


FIG. 1 illustrates an example of a programmable, multi-tenant overlay network 100, according to some embodiments. An overlay network can be a computer network built on the top of another network. Overlay network 100 can include a distributed system such as a cloud-computing network (e.g. public cloud 102). Public cloud 102 can include a cloud-computing network. In some embodiments, public cloud 102 can be implemented, in whole or in part, as a private cloud-computing network (e.g. a proprietary network or datacenter that uses cloud computing technologies). In other embodiments, the public cloud 102 can include SaaS companies 109 which provide applications to enterprises and end-consumers. As used herein, a cloud-computing network can include a computer network(s) that utilizes a variety of different computing concepts that involve a large number of computers connected through a real-time communication network (e.g. the Internet). A public cloud can include a set of computers and computer network resources based on the standard cloud-computing model, in which a service provider makes resources, such as applications and storage, available to the general public over the Internet. Applications, storage, and other resources can be made available by a service provider.


Public cloud 102 can include orchestrator 104 (e.g. a Velocloud® orchestrator). Orchestrator 104 can enable configuration and monitoring of the network from any location with Internet access. Orchestrator 104 can be a central controller for configuring and monitoring a multi-tenant instance of the overlay network described by a unique ‘network ID’. Each such instance can have a set of tenant(s) that have tenant specific policies for sharing resources, access control and configuration. A tenant can then have a ‘tenant-id’ which is used to identify tenants in the network. Multiple independent instances of networks can exist so as to enable self-operated overlay networks similar to the public network.


In this context, an orchestrator 104 can perform various functions such as configuration and monitoring. Orchestrator 104 can enable role-based configuration and management. The following can be examples of roles. An ‘end-user’ (e.g. mapping to an access device such as a laptop or mobile device) that connects to an edge device 108 can be enabled to configure and/or monitor resources and policies that are specific to that user. A ‘tenant administrator’ can configure tenant-wide policy and, by extension, policies for all the users in the tenancy. An ‘operator’ can operate the overlay network by provisioning gateway(s) 106, edge device(s) 108 and/or other resources for the network (e.g. an operator may not be able to view or modify tenant policies).


In addition to this, the orchestrator 104 can also enable ‘authenticated partners’ to modify the behavior of the network (e.g. application service providers who want to reserve extra bandwidth for some application sessions, etc.) via published application program interfaces (APIs).


Public cloud 102 can include gateway(s) 106. A gateway can be a network node equipped for interfacing with another network utilizing different communication protocols. Gateway(s) 106 can be deployed in a public cloud (e.g. as shown in FIG. 1), a private cloud, Internet service provider (ISP) peering points and/or application service peering points that serve as aggregation points for multiple edges. Gateway(s) 106 can be located at peering points in public cloud 102.


Edge device 108 can provide entry points into enterprise and/or service-provider core networks. Example edge devices can include routers, routing switches, integrated access devices (IADs), multiplexers, and a variety of metropolitan area network (MAN) and wide area network (WAN) access devices. Edge device 108 can be deployed inline in one of several modes. In one example, edge device 108 can be deployed as a customer premises equipment (CPE) device in a branch that is capable of serving as a router. In one example, edge device 108 can be deployed as a powered mobile device that can be attached to end-user devices (e.g. laptops, desktops, wearable computers, tablet computers and the like) via universal serial bus (USB). In some examples, edge device 108 can include device software that directly interacts with a host-device operating system. In one example, the edge device 108 may be a virtual machine. A virtual machine can be a software-based emulation of a computer. In some examples, edge device 108 and the gateway(s) 106 can straddle the ‘bottleneck’ section of a communication network (e.g. the ‘last mile’, the final leg of a communication network delivering connectivity to a network host such as an enterprise computing system). In some embodiments, edge device 108 can be characterized as ‘zero touch’ (e.g. no configuration explicitly required at the client side). Accordingly, the edge device can automatically detect available wide area network (WAN) links and locate orchestrator 104.


The edge device 108 sends network packets. Network packets may be control packets, data packets or management packets. Control packets, or control traffic, are used to sense the quality of the path, link characteristics, clock synchronization, etc. This is also known as the control plane. Data packets, or data traffic, are packets sent from the client and/or source computer to the application server running in the enterprise data center or the private or public cloud 102. This is also known as the data plane. Management packets, or management traffic, are packets sent from the edge 108 or gateway 106 to the orchestrator 104 and include heartbeat messages, flow statistics, etc. This is also known as the management plane. In one example, both the control plane and the data plane can pass through the gateway 106. In some examples, only the control traffic may be sent to the gateway 106 and the data plane may bypass the gateway 106 and go directly from the edge 108 to the application server.



FIG. 2 depicts a process 200 of a network link used to replace a ‘last mile’ connectivity (e.g. MPLS, T1, etc.), according to some embodiments. The network links can be multiple consumer grade broadband links, private links (MPLS, etc.), WiFi networks or 3G/4G mobile links with the ability to perform process 200. In step 202 of process 200, network traffic can be identified using deep-packet inspection to determine the application and/or application type of the traffic. Appropriate measures can be applied to ensure the QoS of the specific traffic based on the application, application type (e.g. real-time, transactional, bulk) and/or business priority of the traffic. For example, if the network traffic is identified as voice traffic, which is a high business priority, then forward-error correction can be performed to reduce or eliminate packet loss. In another example, the network traffic can be identified as a bulk-file transfer. In this example, the file-transfer network traffic can be set as the lowest-priority traffic and can use a small portion of bandwidth under contention or more bandwidth if no other traffic is in the network. Traffic identified as ‘regular web browsing’ (such as Facebook® and YouTube®) can be dropped out of the network altogether and sent over the regular Internet as it is not business critical. In step 204, bandwidth can be aggregated from all the links (e.g. a link can be a communications channel that connects two or more communicating devices). For example, bandwidth can be aggregated with a multipath transport layer protocol capable of ‘striping’ a traffic flow (e.g. a flow of data packets from a source to a destination) across multiple paths between two peers (e.g. edge 108 and/or gateway 106). Traffic flow can be ‘striped’ across the multiple paths in one peer and ‘gathered’ at the other peer. In step 206, traffic on the links can be intelligently load balanced by sending successive packets belonging to the same flow (e.g. a traffic flow) on multiple links selected by an application-aware intelligent link characterization and/or link selection. It is noted that the selected QoS based on the application can inform the selected links (e.g. whether to bind traffic to the best link, load balance or replicate traffic, etc.). The selected QoS can also determine whether the application is sensitive to loss and/or jitter. Based on the levels of loss and jitter in the network and the sensitivity of the traffic to them, a mitigation mechanism is put into play. In step 208, outages can be prevented using reliable, self-correcting data transfer to ensure in-order data delivery with the ability to maintain application persistence, as long as there is at least one active link, by sending packets belonging to the same flow on the active link(s). In step 210, errors on lossy links can be corrected using an error control mechanism for data transmission (e.g. Automatic Repeat-reQuest (ARQ) and/or forward error correction (FEC)) selectively based on the traffic identified and the current measured conditions in the network.
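
As an illustrative sketch only (Python), the following shows how an identified application type could be mapped to a per-flow handling policy in the spirit of steps 202-210; the policy table, field names, and link structure are assumptions made for this example rather than the disclosed implementation:

# Map a DPI-identified application type to an illustrative handling policy.
POLICIES = {
    "voice":         {"priority": "high",   "error_control": "fec", "paths": "replicate"},
    "bulk_transfer": {"priority": "lowest", "error_control": "arq", "paths": "load_balance"},
    "web_browsing":  {"priority": "none",   "error_control": None,  "paths": "direct_internet"},
}

def classify_and_select(app_type, links):
    """Return the handling policy for the flow and the links eligible to carry it."""
    default = {"priority": "normal", "error_control": "arq", "paths": "load_balance"}
    policy = POLICIES.get(app_type, default)
    active = [link for link in links if link["up"]]   # step 208: only active links carry the flow
    return policy, active

# Example: a voice flow over two measured links
links = [{"name": "dsl", "up": True}, {"name": "lte", "up": True}]
print(classify_and_select("voice", links))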



FIG. 3 depicts a process 300 of removing the requirement for an IT administrator to configure each individual device in an enterprise computing network, according to some embodiments. In step 302, WAN links that are connected directly to the edge device can be detected and measured without the need for an external router. In step 304, a central configuration point in the cloud can be connected to. Enterprise-specific configuration data, including available gateways, can be downloaded. In step 306, the available gateway(s) can be connected to using the enterprise-specific configuration data. In step 308, an available bandwidth on each path can be measured.
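
For illustration, a hedged Python sketch of the zero-touch sequence in process 300 follows; the orchestrator URL, the JSON fields, and the helper functions are hypothetical placeholders rather than the actual orchestrator API:

import json
from urllib import request

def detect_wan_links():
    # Step 302: a real edge device would probe its interfaces and measure each
    # link; this placeholder simply returns static interface names.
    return ["wan0", "wan1"]

def bring_up_edge(orchestrator_url, enterprise_id):
    links = detect_wan_links()
    # Step 304: contact the central configuration point and download
    # enterprise-specific configuration data, including available gateways.
    with request.urlopen(f"{orchestrator_url}/config/{enterprise_id}") as resp:
        config = json.load(resp)
    gateways = config.get("gateways", [])
    # Steps 306-308: the edge would then connect to each gateway and measure
    # the available bandwidth on each path (omitted in this sketch).
    return links, gateways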


A multipath (MP) protocol can be implemented by combining multiple network paths into a composite connection that multiplexes packets from MP packet flows and control information (path quality, link characteristics, clock synchronization, etc.). An MP packet flow can map to a pair of internet protocol (IP) flows (e.g. one flow in each direction such as forward and reverse and between two endpoints). The MP packet flow can be identified by a set of parameters that describe the pair of IP flows (e.g. a five (5)-tuple, namely: source IP address, destination IP address, source port, destination port, and the network layer three (3) protocol identifier; the reverse path is described with source and destination swapped). In some examples, a multipath routing can include a routing technique of using multiple alternative paths through a network.
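
The following small Python sketch illustrates deriving the forward and reverse IP-flow keys that together describe one MP packet flow; the FlowKey structure is an assumption introduced for the example:

from collections import namedtuple

FlowKey = namedtuple("FlowKey", "src_ip dst_ip src_port dst_port proto")

def mp_flow_keys(src_ip, dst_ip, src_port, dst_port, proto):
    """Return the forward and reverse five-tuples of an MP packet flow
    (the reverse path has source and destination swapped)."""
    forward = FlowKey(src_ip, dst_ip, src_port, dst_port, proto)
    reverse = FlowKey(dst_ip, src_ip, dst_port, src_port, proto)
    return forward, reverse

# Example: a TCP flow (layer 3 protocol identifier 6) from a branch client to a server
print(mp_flow_keys("10.0.0.5", "203.0.113.7", 49152, 443, 6))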


In effect, every network layer four (4) flow (e.g. a pair of layer three (3) flows, such as an application flow) can have a one-to-one mapping with the MP packet flow. In some embodiments, an application flow and an MP packet flow refer to the same notion. Each MP packet flow can be assigned an MP packet-flow identifier. The MP packet-flow identifier can be unique to the set of MP peers (e.g. peer one (1) and/or peer two (2) of FIG. 4). An MP node can aggregate connections from multiple MP peers that are sending MP packet flows to it (e.g. gateway(s) 106). The MP node can aggregate flow identifiers generated by the non-aggregation peer (e.g. edge device 108) that may not be unique.


In one example, at the time of first connection between two MP stacks, an MP_INITIATE message can be passed which assigns a unique identifier that is used by the non-aggregation peer to ensure the flow identifier is unique at the aggregation point (see FIG. 4). In one example, this operation can be implemented using an identifier specific to the particular network of the non-aggregation peer.



FIG. 4 illustrates an example flow sequence 400 diagram for an MP packet flow, according to some embodiments. In one embodiment, peer one (1) can be a client-side edge device and peer two (2) can be a cloud-based gateway device. Peer one (1) can transmit an MP_INITIATE 402 to peer two (2). MP_CONTROL 408 (e.g. control information such as QoS parameters, treatment of data traffic flow parameters, etc.) can be exchanged between peer one (1) and peer two (2). Data packets can then be exchanged (e.g. MP_DATA 404 and MP_DATA 412). Data packets can include any user data. These data packets can be sequenced, numbered and/or sent across multiple links. When sent across multiple links, redundant copies of the packets can be purged on receipt. Data packets can be acknowledged on return. Additional control data (e.g. MP_CONTROL) can be exchanged. MP_FIN 406 can initiate closing the MP packet flow session by peer one (1). Peer two (2) can provide MP_FIN+ACK 414 to acknowledge MP_FIN 406 and terminate the session.
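
As a hedged illustration of purging redundant copies on receipt (not the disclosed code), a simple Python sketch that drops duplicate sequence numbers arriving over different links:

def dedupe(packets):
    """packets: iterable of (sequence_number, payload) tuples in arrival order.
    Returns the packets with duplicate sequence numbers dropped."""
    seen = set()
    out = []
    for seq, payload in packets:
        if seq in seen:
            continue              # redundant copy received on another link; purge it
        seen.add(seq)
        out.append((seq, payload))
    return out

# Example: packet 2 arrives once per link; the second copy is purged
print(dedupe([(1, b"a"), (2, b"b"), (2, b"b"), (3, b"c")]))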



FIG. 5 illustrates an example MP process 500 for bandwidth aggregation and data ordering, according to some embodiments. Process 500 can be used for intersession load balancing. In some embodiments, the MP stack can achieve bandwidth aggregation by sending successive packets belonging to the same MP packet flow on the different paths, to a peer MP stack. In some examples, different paths can be established on different links (though this is not a limiting condition). For example, data packets 502 can be an application flow. Data packets 502 can be striped with an MP stripe 504 in one device (e.g. edge device 108).


The endpoints (e.g. client and the application server) can infer this as an aggregated throughput as more packets are delivered to the endpoints in a given time window when compared to the non-multipath case. MP process 500 can deliver ordered data 506 and 508 between two MP peers even if the data is sent on different paths between the peers. For example, successive data packets belonging to the same flow can be sent on different links with additional metadata. The metadata can identify data packet absolute offsets in the original flow. This metadata can be used to re-arrange the data back in order when the underlying application requires in-order data. In some applications (e.g. real-time collaboration applications) this re-ordering may introduce latencies that may be unacceptable. In these instances, data packets can be delivered in the same order of arrival. The application can handle ordering of data packets. This application awareness can be in the transport layer. This awareness can be implemented on both sides of the network to enable interpretation of the metadata and reassembly of the data. This functionality can be selectively turned on/off based on detecting an application's particular requirements on receiving the ordered data 506 and 508. Additional headers, shown below, marked as MP headers 510 and 514 (e.g. ‘Vn’) can be added. MP headers 510 and 514 can describe the data ordering along with other metadata (e.g. the MP packet flow identifier and timestamps).



FIG. 6 illustrates an example of metadata in an MP header, according to some embodiments. This metadata can enable the peer MP stack to receive the MP packet flows 506 and 508 (including striped data packets 512 and 516) from different paths in their order of arrival and re-arrange them in order to re-create the original flow of data packets 502 as data packets 520.


In one example, a Global Data Sequence Number (GDSN) can be the byte offset of the data with respect to the original application flow (e.g. data packets 502). The GDSN can be used to reorder the data. Each MP packet can have the GDSN which is used by the peer MP stack to re-order the MP packet flow in an original order. Additionally, each peer can transmit the last seen GDSN on its stack for a given MP packet flow ‘piggybacked’ on an MP data packet. This last seen GDSN can be used to purge queues and re-transmit a missing GDSN. In the case the data transfer is half-duplex, an MP_ACK message can be explicitly used to transmit the last seen GDSN to the other peer.
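
A minimal Python sketch of GDSN-based re-ordering follows; the segment representation and the handling of gaps are simplifying assumptions for illustration:

def reassemble(segments):
    """segments: list of (gdsn, payload) in arrival order, where gdsn is the
    byte offset of the payload in the original application flow. Returns the
    in-order byte stream up to the first gap (a missing GDSN would normally
    trigger a retransmission)."""
    stream = bytearray()
    expected = 0
    for gdsn, payload in sorted(segments):   # order segments by GDSN
        if gdsn != expected:
            break                            # gap: wait for the missing GDSN
        stream.extend(payload)
        expected += len(payload)
    return bytes(stream)

# Segments arriving out of order over two different paths
print(reassemble([(5, b"world"), (0, b"hello")]))   # b'helloworld'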


In the context of FIG. 4, during the closing of the MP packet flow 400, the MP_FIN 406 can be set by the peer that initiates the closing of the MP packet flow 400. The GDSN in this packet can be used by the other peer to acknowledge teardown via the MP_FIN+ACK 414 with a GDSN of zero (0).


An example method of traffic identification is now provided. An MP system can utilize an external deep-packet inspection engine (and/or a form of computer network packet filtering) to identify the application and application type of a given flow. This information can be used to determine the optimal MP packet flow settings to ensure the MP packet flow's QoS parameter. In cases where the application cannot be identified, an MP system can monitor the behavior of MP packet flows over time and attempt to derive the optimal settings for QoS. Future flows can be mapped to these new settings using IP address, port number, protocol, TOS/DSCP tag and/or destination hostname as the MP system learns optimal MP traffic parameters. Additionally, these settings which were obtained through this slow learning method (e.g. which can include machine-learning methodologies such as neural networks, supervised learning, unsupervised learning, clustering, structured prediction, decision tree learning, reinforcement learning and the like) can be shared with all other edges in the network via the orchestrator 104, which can allow learning to be spread across an enterprise or the entire network of connected edges.
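
For illustration only, a Python sketch of a learned-settings cache keyed by the fields named above; the key layout and the settings values are assumptions, not the disclosed learning method:

class LearnedQosCache:
    """Cache of QoS settings derived by observing flows that DPI could not identify."""

    def __init__(self):
        self._settings = {}

    @staticmethod
    def key(ip, port, proto, dscp=None, hostname=None):
        return (ip, port, proto, dscp, hostname)

    def learn(self, key, settings):
        # Store settings derived from observed flow behavior over time.
        self._settings[key] = settings

    def lookup(self, key):
        # Map a future flow to previously learned settings, if any.
        return self._settings.get(key)

cache = LearnedQosCache()
k = LearnedQosCache.key("203.0.113.7", 443, "tcp", dscp=46)
cache.learn(k, {"paths": "replicate", "error_control": "fec"})
print(cache.lookup(k))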


Deep-packet inspection (DPI) can include examining, the data part (and/or also the packet header, etc.) of a packet as it passes an inspection point, searching for protocol non-compliance, viruses, spam, intrusions, or defined criteria to decide whether the packet may pass or if it needs to be routed to a different destination, or, for the purpose of collecting statistical information. DPI can be performed with a DPI engine (e.g. Qosmos®, DPI engine, Dell™ SonicWALL™ Reassembly-Free Deep-Packet Inspection™ (RFDPI) engine, etc.) and/or other packet analyser.


An example of path characterization and selection is now provided. An MP protocol can implicitly implement communicating an MP packet flow on multiple paths (e.g. on a set of underlying links). Consequently, an active path characterization that periodically measures the health of a path can be implemented. A cost function that computes a cost for each path based on the parameters measured in the health check can be implemented. A selection algorithm can be implemented that applies a set of constraints and chooses the path based on the path cost and the determined transmit algorithm.


An example of active path characterization is now provided. As a part of link characterization, the latency (e.g. one-way packet delay), jitter (e.g. packet delay variation), loss and available bandwidth on the path can be measured. To measure latency between two MP peers on a given path, a clock synchronization operation can be implemented in the MP peers. An example time synchronization protocol is now provided. Timestamp measurements can be sent continuously to whichever device is performing the role of master clock. The lowest difference in timestamps from a set of measurements can be used as a measure of the offset between the clocks. Backward time shifts which could influence measurements and computation can be avoided. The drift rate can be measured by observing the change in offset over time. Based on this drift rate, the interval between continuous measurements can be calculated to ensure that clocks will remain synchronized over time. Once the clocks are synchronized, the one-way receive latency and jitter can then be measured by sending a timestamped packet train.
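
A brief Python sketch of the offset and drift estimation described above; the sample format and the example values are assumptions for illustration:

def estimate_offset(samples):
    """samples: list of (local_send_time, master_receive_time) pairs.
    The lowest (receive - send) difference approximates the clock offset,
    since it discounts samples inflated by queuing delay."""
    return min(rx - tx for tx, rx in samples)

def drift_rate(offset_old, offset_new, interval_seconds):
    """Drift in seconds per second, used to size the re-synchronization interval."""
    return (offset_new - offset_old) / interval_seconds

print(estimate_offset([(0.000, 0.120), (1.000, 1.080), (2.000, 2.095)]))   # ~0.08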


In one example, multipath transport can handle and/or prevent congestion issues when the network paths are sufficiently diverse from a network topology standpoint. In these cases, the overall load on the individual paths can be reduced. On the other hand, diverse network paths can have diverse characteristics in terms of latency, throughput, loss and/or jitter. The load-balancing algorithm can send packets on a ‘best possible’ link until the point the link is oversubscribed and/or there is loss on the link, before switching to another path. When the network includes a wireline backbone (cable, DSL etc.), alternate paths can be utilized when available. On the other hand, with respect to networks with a wireless backbone (e.g. mobile, WiFi, WiMax, etc.), a packet drop may be an ‘ephemeral’ event that is short-lived with relatively quick recovery. In such a case, it may not be prudent to switch to alternate paths or clamp down the rate for this event without consideration of various other metrics. Thus, other metrics in addition to a loss value can be utilized. For example, a combination of parameters can be utilized, including, inter alia: the ECN flag (e.g. explicit congestion notification) set by an upstream router in an IP layer, a rate of acknowledgements received, a rate of loss in an interval of time to estimate the lossy value of a link, etc.
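
As a hedged sketch (Python), the parameters above can be folded into a single lossiness estimate for a link; the weights here are arbitrary illustrative choices, not values from the disclosure:

def lossiness_score(ecn_marked_fraction, ack_rate_ratio, loss_rate):
    """ecn_marked_fraction: share of packets with the ECN flag set upstream.
    ack_rate_ratio: received acknowledgement rate over the expected rate (1.0 = healthy).
    loss_rate: fraction of packets lost in the measurement interval."""
    ack_deficit = 1.0 - min(ack_rate_ratio, 1.0)
    return 0.3 * ecn_marked_fraction + 0.3 * ack_deficit + 0.4 * loss_rate

# A wireless link with a brief, ephemeral drop still scores low overall
print(lossiness_score(ecn_marked_fraction=0.0, ack_rate_ratio=0.98, loss_rate=0.01))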


In one example, the cost of an MP path can be computed as the time taken for a data packet to reach from one peer to another peer, inclusive of such factors as scheduling and/or MP processing overheads. It can be computed as the sum of the jitter, latency and processing delays. The path with the least cost with respect to a given set of constraints (e.g. link level policies, application specific path usage policies, etc.) can be selected accordingly.
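
A minimal Python sketch of this cost computation and a constrained least-cost selection; the field names and the allow-list form of the constraints are assumptions for the example:

def path_cost(path):
    # Sum of jitter, latency and processing delays, per the cost model above.
    return path["latency"] + path["jitter"] + path["processing_delay"]

def select_path(paths, allowed=None):
    """Pick the least-cost path among those satisfying the given constraints
    (modeled here as an optional allow-list of path names)."""
    candidates = [p for p in paths if allowed is None or p["name"] in allowed]
    return min(candidates, key=path_cost) if candidates else None

paths = [
    {"name": "mpls", "latency": 0.020, "jitter": 0.002, "processing_delay": 0.001},
    {"name": "lte",  "latency": 0.060, "jitter": 0.015, "processing_delay": 0.001},
]
print(select_path(paths)["name"])                       # 'mpls'
print(select_path(paths, allowed={"lte"})["name"])      # policy restricts the flow to 'lte'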


An example MP path selection method is now provided. Based on the application and/or the current measured network conditions, the MP path(s) can be treated in various ways. In one example, the MP path can be load balanced such that each data packet selects the path with the lowest path cost and is transmitted on that path. In another example, the MP path can be fixed such that the first packet selects the best path available. This path can be used as long as it is available. If an active path characterization operation determines that the path is no longer available, a path selection operation can be performed again and the MP packet flow can migrate to a next best path. In yet another example, the MP path can be replicated across n-number paths based on such parameters as, inter alia: the importance of the application, bandwidth required and/or expected potential for packet loss in the network.


In one example, QoS can be ensured for an application by utilizing a combination of path selection methods such as those provided supra, as well as network scheduling, packet reordering and/or error correction. For example, when an MP packet flow is initiated, an edge device (e.g. edge device 108) can identify the application and determine the proper QoS methods to be applied for this type of flow. The methods may or may not be symmetric (e.g. the same for the sender and receiver). Once the edge device determines the methods to be used, a control message can be sent to the gateway to ensure that the gateway (e.g. gateway(s) 106) in turn has information as to how to treat the MP packet flow (e.g. without having to do its own application identification). In the event the MP system (e.g. based on network conditions) and/or an administrator indicates that the methods should be changed, the edge device can again signal the gateway with a control message. The methods can be updated without interruption to the service or traffic. For example, upon receipt of the control message from the edge, the gateway can update the QoS methods of the flow without deleting the existing flow. As a result, the next packet to be sent can use the updated scheduling policies and link selection methods that were transmitted, without interruption. For example, an MP packet flow that is being load balanced and is changed to replication as loss increases in the network can load balance packets 1-n until the control message is received. A packet flow can be a sequence of packets from a source computer to a destination, which may be another host, a multicast group, or a broadcast domain. Accordingly, packets beginning with n+1 can begin to be replicated.
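
The following Python sketch illustrates, under assumed names and a simplified link model, how a flow's path-selection method could be switched from load balancing to replication mid-stream without tearing down the flow:

class MpFlow:
    def __init__(self, links):
        self.links = links
        self.method = "load_balance"      # initial method chosen by the edge device
        self.counter = 0

    def apply_control_message(self, new_method):
        # The gateway updates the flow's QoS method without deleting the flow.
        self.method = new_method

    def send(self, packet):
        self.counter += 1
        if self.method == "load_balance":
            link = self.links[self.counter % len(self.links)]
            return [(link, packet)]
        return [(link, packet) for link in self.links]   # replicate on every link

flow = MpFlow(["wan0", "wan1"])
print(flow.send("pkt_n"))                 # load balanced onto a single link
flow.apply_control_message("replicate")   # control message arrives as loss increases
print(flow.send("pkt_n_plus_1"))          # replicated on both links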


The gateway can be a multi-tenant gateway wherein multiple customers with edge devices can connect to the same gateway without actually exposing any of their data to each other. The multi-tenant gateway can implement a two-level hierarchical scheduler. In this case, a total egress bandwidth to the edge can be equally shared (e.g. in a work conserving manner) between all the connected edges at the top level (e.g. root level). The second level (e.g. a leaf) can schedule the MP packet flows belonging to a particular edge device rather than have resource limits defined for that edge device by the top level. To ensure that the scheduler does not hit processing limits for scheduling flows, the leaf level scheduler may not have per flow queues. Instead, a multiplexing algorithm can be utilized. The multiplexing algorithm can map a set of flows characterized by a set of parameters to a set of queues such that there is a many to one mapping between flows and queues.
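
A small Python sketch of the many-to-one multiplexing of flows onto a fixed set of leaf queues; the hash choice and the queue count are illustrative assumptions:

import hashlib

def queue_for_flow(flow_key, num_queues=8):
    """Deterministically map a flow (described by its parameter tuple) to one
    of num_queues leaf queues for a given edge device, so the leaf scheduler
    does not need a queue per flow."""
    digest = hashlib.sha256(repr(flow_key).encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_queues

print(queue_for_flow(("10.0.0.5", "203.0.113.7", 49152, 443, 6)))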


In one example, SaaS applications may also directly interact with the software in the edge device (e.g. edge device 108 in FIG. 1) or gateway(s) (e.g. gateway 106 in FIG. 1). This can be done, for example, to query the health of the last-mile and to provision network bandwidth and characteristics in the last-mile to ensure QoS for the application. The edge device and/or the gateway(s), regardless of their embodiments, provide APIs (application programming interfaces) that a SaaS application, with suitable permissions, can use to determine how the last-mile from the edge device to the gateway (e.g. in both directions) is performing. With this information the SaaS application may throttle back bandwidth so that the application continues to operate without congesting the network further and yet function reasonably well. By default, when an end-user accesses an application (which may run anywhere in the public cloud), the edge device identifies the application and determines the proper QoS methods to apply for this type of flow. As noted supra, this includes network scheduling, packet reordering and/or error correction which is determined by policies set in the Orchestrator (e.g. orchestrator 104 in FIG. 1). The SaaS application may modify these policies dynamically to ensure that the end-user gets the best experience possible given the current last-mile characteristics.



FIG. 7 depicts an exemplary computing system 700 that can be configured to perform any one of the processes provided herein. In this context, computing system 700 may include, for example, a processor, memory, storage, and I/O devices (e.g., monitor, keyboard, disk drive, Internet connection, etc.). However, computing system 700 may include circuitry or other specialized hardware for carrying out some or all aspects of the processes. In some operational settings, computing system 700 may be configured as a system that includes one or more units, each of which is configured to carry out some aspects of the processes either in software, hardware, or some combination thereof.



FIG. 7 depicts computing system 700 with a number of components that may be used to perform any of the processes described herein. The main system 702 includes a motherboard 704 having an I/O section 705, one or more central processing units (CPUs) 708, and a memory section 710, which may have a flash memory card 712 related to it. The I/O section 705 can be connected to a display 714, a keyboard and/or other user input (not shown), a disk storage unit 716, and a media drive unit 718. The media drive unit 718 can read/write a computer-readable medium 720, which can include programs 722 and/or data.


CONCLUSION

Although the present embodiments have been described with reference to specific example embodiments, various modifications and changes can be made to these embodiments without departing from the broader spirit and scope of the various embodiments. For example, the various devices, modules, etc. described herein can be enabled and operated using hardware circuitry, firmware, software or any combination of hardware, firmware, and software (e.g., embodied in a machine-readable medium).


In addition, it can be appreciated that the various operations, processes, and methods disclosed herein can be embodied in a machine-readable medium and/or a machine accessible medium compatible with a data processing system (e.g., a computer system), and can be performed in any order (e.g., including using means for achieving the various operations). Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense. In some embodiments, the machine-readable medium can be a non-transitory form of machine-readable medium.

Claims
  • 1. A method for directing packet flows in a software defined wide area network (SD-WAN), the method comprising: at an edge device connected to a gateway via a plurality of network links between the edge device and the gateway, wherein the gateway is at a different location than the edge device: receiving a packet flow; performing deep packet inspection (DPI) on the received packet flow to identify an application type associated with the received packet flow; and based on the identified application type, selecting a network link from the plurality of network links between the edge device and the gateway for forwarding the packet flow to the gateway.
  • 2. The method of claim 1, wherein the gateway is in a public cloud.
  • 3. The method of claim 1, wherein the gateway is in a datacenter.
  • 4. The method of claim 1, wherein the edge device is at a premises of an enterprise, and the plurality of network links connect the premises to the gateway at the different location.
  • 5. The method of claim 1, wherein selecting the network link comprises selecting, based on the identified application type, two or more network links from the plurality of network links to forward the packets of the flow to the gateway.
  • 6. The method of claim 5, wherein selecting the two or more network links comprises: assessing bandwidth on the plurality of network links; and load balancing traffic on the two or more network links to distribute the packets belonging to the flow among the two or more network links.
  • 7. The method of claim 6, wherein load balancing traffic comprises assessing a quality of a network link based on the identified application type.
  • 8. The method of claim 7, wherein assessing the quality of the network link comprises measuring a quality of the network link based on at least one of latency, jitter, loss, and available bandwidth.
  • 9. The method of claim 5, wherein selecting the two or more network links comprises identifying a set of two or more active network links.
  • 10. The method of claim 1, wherein the identified application type comprises an identity of a specific application associated with the network-traffic flow.
  • 11. The method of claim 1, wherein the identified application type comprises a bulk-file transfer application type, wherein bulk-file transfer network traffic is set as a lowest priority traffic and uses a small portion of network bandwidth.
  • 12. The method of claim 1, wherein when the identified application type comprises a social-network website browsing application type, the network-traffic flow is switched to an internet connection.
  • 13. The method of claim 1, wherein the identified application type comprises a voice application type, and wherein the voice application type comprises a high priority traffic type, the method further comprising performing a forward-error correction to reduce packet loss.
  • 14. A non-transitory machine readable medium storing a program for execution by a set of processing units, the program for directing packet flows along a plurality of network links defined between an edge device and a gateway device of a software defined wide area network (SD-WAN), the program comprising sets of instructions for: at an edge device connected to a gateway that is at a different location than the edge device: receiving a packet flow; performing deep packet inspection (DPI) on the received packet flow to identify an application type associated with the received packet flow; and based on the identified application type, selecting a network link from the plurality of network links for forwarding the packet flow to the gateway.
  • 15. The non-transitory machine readable medium of claim 14, wherein the gateway is in one of a public cloud and a datacenter.
  • 16. The non-transitory machine readable medium of claim 14, wherein the edge device is at a premises of an enterprise, and the plurality of network links connect the premises to the gateway at the different location.
  • 17. The non-transitory machine readable medium of claim 14, wherein the set of instructions for selecting the network link comprises a set of instructions for selecting, based on the identified application type, two or more network links from the plurality of network links to forward the packets of the flow to the gateway.
  • 18. The non-transitory machine readable medium of claim 17, wherein the set of instructions for selecting the two or more network links comprises sets of instructions for: assessing bandwidth on the plurality of network links; and load balancing traffic on the two or more network links to distribute the packets belonging to the flow among the two or more network links.
  • 19. The non-transitory machine readable medium of claim 18, wherein the set of instructions for load balancing traffic comprises a set of instructions for assessing a quality of a network link based on the identified application type by measuring a quality of the network link based on at least one of latency, jitter, loss, and available bandwidth.
  • 20. The non-transitory machine readable medium of claim 14, wherein the identified application type comprises an identity of a specific application associated with the network-traffic flow.
CLAIM OF BENEFIT TO PRIOR APPLICATIONS

This application is a continuation application of U.S. patent application Ser. No. 15/221,608, filed Jul. 28, 2016, now published as U.S. Patent Publication 2017/0134186. U.S. patent application Ser. No. 15/221,608 is a continuation-in-part of U.S. patent application Ser. No. 14/321,818, filed on Jul. 2, 2014, now issued as U.S. Pat. No. 9,722,815. U.S. patent application Ser. No. 14/321,818 claims priority to U.S. Provisional Patent Application 61/844,822, filed on Jul. 10, 2013. U.S. patent application Ser. No. 15/221,608, now published as U.S. Patent Publication 2017/0134186, U.S. patent application Ser. No. 14/321,818, now issued as U.S. Pat. No. 9,722,815, and U.S. Provisional Patent Application 61/844,822 are hereby incorporated by reference in their entirety.

20170195169 Mills et al. Jul 2017 A1
20170201585 Doraiswamy et al. Jul 2017 A1
20170207976 Rovner et al. Jul 2017 A1
20170214545 Cheng et al. Jul 2017 A1
20170214701 Hasan Jul 2017 A1
20170223117 Messerli et al. Aug 2017 A1
20170237710 Mayya et al. Aug 2017 A1
20170257260 Govindan et al. Sep 2017 A1
20170257309 Appanna Sep 2017 A1
20170264496 Ao et al. Sep 2017 A1
20170279717 Bethers et al. Sep 2017 A1
20170279803 Desai et al. Sep 2017 A1
20170280474 Vesterinen et al. Sep 2017 A1
20170289002 Ganguli et al. Oct 2017 A1
20170302565 Ghobadi et al. Oct 2017 A1
20170310641 Jiang et al. Oct 2017 A1
20170310691 Vasseur et al. Oct 2017 A1
20170317974 Masurekar et al. Nov 2017 A1
20170337086 Zhu et al. Nov 2017 A1
20170339054 Yadav et al. Nov 2017 A1
20170339070 Chang et al. Nov 2017 A1
20170364419 Lo Dec 2017 A1
20170366445 Nemirovsky et al. Dec 2017 A1
20170366467 Martin et al. Dec 2017 A1
20170374174 Evens et al. Dec 2017 A1
20180006995 Bickhart et al. Jan 2018 A1
20180007123 Cheng et al. Jan 2018 A1
20180014051 Phillips et al. Jan 2018 A1
20180034668 Mayya et al. Feb 2018 A1
20180041425 Zhang Feb 2018 A1
20180062914 Boutros et al. Mar 2018 A1
20180062917 Chandrashekhar et al. Mar 2018 A1
20180063036 Chandrashekhar et al. Mar 2018 A1
20180063193 Chandrashekhar et al. Mar 2018 A1
20180063233 Park Mar 2018 A1
20180069924 Tumuluru et al. Mar 2018 A1
20180074909 Bishop et al. Mar 2018 A1
20180077081 Lauer et al. Mar 2018 A1
20180077202 Xu Mar 2018 A1
20180084081 Kuchibhotla et al. Mar 2018 A1
20180097725 Wood et al. Apr 2018 A1
20180114569 Strachan et al. Apr 2018 A1
20180131608 Jiang et al. May 2018 A1
20180131615 Zhang May 2018 A1
20180131720 Hobson et al. May 2018 A1
20180145899 Rao May 2018 A1
20180159856 Gujarathi Jun 2018 A1
20180167378 Kostyukov et al. Jun 2018 A1
20180176073 Dubey et al. Jun 2018 A1
20180176082 Katz et al. Jun 2018 A1
20180176130 Banerjee et al. Jun 2018 A1
20180213472 Ishii et al. Jul 2018 A1
20180219765 Michael et al. Aug 2018 A1
20180219766 Michael et al. Aug 2018 A1
20180234300 Mayya et al. Aug 2018 A1
20180260125 Botes et al. Sep 2018 A1
20180262468 Kumar et al. Sep 2018 A1
20180270104 Zheng et al. Sep 2018 A1
20180278541 Wu et al. Sep 2018 A1
20180295529 Jen et al. Oct 2018 A1
20180302286 Mayya et al. Oct 2018 A1
20180302321 Manthiramoorthy et al. Oct 2018 A1
20180351855 Sood et al. Dec 2018 A1
20180351862 Jeganathan et al. Dec 2018 A1
20180351863 Vairavakkalai et al. Dec 2018 A1
20180351882 Jeganathan et al. Dec 2018 A1
20180373558 Chang et al. Dec 2018 A1
20180375744 Mayya et al. Dec 2018 A1
20180375824 Mayya et al. Dec 2018 A1
20180375967 Pithawala et al. Dec 2018 A1
20190014038 Ritchie Jan 2019 A1
20190020588 Twitchell, Jr. Jan 2019 A1
20190020627 Yuan Jan 2019 A1
20190028552 Johnson et al. Jan 2019 A1
20190036810 Michael et al. Jan 2019 A1
20190046056 Khachaturian et al. Feb 2019 A1
20190058657 Chunduri et al. Feb 2019 A1
20190058709 Kempf et al. Feb 2019 A1
20190068470 Mirsky Feb 2019 A1
20190068493 Ram et al. Feb 2019 A1
20190068500 Hira Feb 2019 A1
20190075083 Mayya et al. Mar 2019 A1
20190103990 Cidon et al. Apr 2019 A1
20190103991 Cidon et al. Apr 2019 A1
20190103992 Cidon et al. Apr 2019 A1
20190103993 Cidon et al. Apr 2019 A1
20190104035 Cidon et al. Apr 2019 A1
20190104049 Cidon et al. Apr 2019 A1
20190104050 Cidon et al. Apr 2019 A1
20190104051 Cidon et al. Apr 2019 A1
20190104052 Cidon et al. Apr 2019 A1
20190104053 Cidon et al. Apr 2019 A1
20190104063 Cidon et al. Apr 2019 A1
20190104064 Cidon et al. Apr 2019 A1
20190104109 Cidon et al. Apr 2019 A1
20190104111 Cidon et al. Apr 2019 A1
20190104413 Cidon et al. Apr 2019 A1
20190140889 Mayya et al. May 2019 A1
20190140890 Mayya et al. May 2019 A1
20190158605 Markuze et al. May 2019 A1
20190199539 Deng et al. Jun 2019 A1
20190220703 Prakash et al. Jul 2019 A1
20190238364 Boutros et al. Aug 2019 A1
20190238446 Barzik et al. Aug 2019 A1
20190238449 Michael et al. Aug 2019 A1
20190238450 Michael et al. Aug 2019 A1
20190268421 Markuze et al. Aug 2019 A1
20190280962 Michael et al. Sep 2019 A1
20190280963 Michael et al. Sep 2019 A1
20190280964 Michael et al. Sep 2019 A1
20190313907 Khachaturian et al. Oct 2019 A1
20190319847 Nahar et al. Oct 2019 A1
20190356736 Narayanaswamy et al. Nov 2019 A1
20190364099 Thakkar et al. Nov 2019 A1
20190372888 Michael et al. Dec 2019 A1
20190372889 Michael et al. Dec 2019 A1
20190372890 Michael et al. Dec 2019 A1
20200014615 Michael et al. Jan 2020 A1
20200014616 Michael et al. Jan 2020 A1
20200014661 Mayya et al. Jan 2020 A1
20200021514 Michael et al. Jan 2020 A1
20200021515 Michael et al. Jan 2020 A1
20200036624 Michael et al. Jan 2020 A1
20200059420 Abraham Feb 2020 A1
20200059459 Abraham et al. Feb 2020 A1
20200092207 Sipra et al. Mar 2020 A1
20200097327 Beyer et al. Mar 2020 A1
20200099659 Cometto et al. Mar 2020 A1
20200106696 Michael et al. Apr 2020 A1
20200106706 Mayya et al. Apr 2020 A1
20200119952 Mayya et al. Apr 2020 A1
20200127905 Mayya et al. Apr 2020 A1
20200153736 Liebherr et al. May 2020 A1
20200177503 Hooda et al. Jun 2020 A1
20200204460 Schneider et al. Jun 2020 A1
20200213212 Dillon et al. Jul 2020 A1
20200218558 Sreenath et al. Jul 2020 A1
20200235990 Janakiraman et al. Jul 2020 A1
20200235999 Mayya et al. Jul 2020 A1
20200236046 Jain et al. Jul 2020 A1
20200244721 S et al. Jul 2020 A1
20200267184 Vera-Schockner Aug 2020 A1
20200280587 Janakiraman et al. Sep 2020 A1
20200296011 Jain et al. Sep 2020 A1
20200296026 Michael et al. Sep 2020 A1
20200314614 Moustafa et al. Oct 2020 A1
20200351188 Arora et al. Nov 2020 A1
20200366562 Mayya et al. Nov 2020 A1
20200382345 Zhao et al. Dec 2020 A1
20200382387 Pasupathy et al. Dec 2020 A1
20210006490 Michael et al. Jan 2021 A1
20210029088 Mayya et al. Jan 2021 A1
20210067372 Cidon et al. Mar 2021 A1
20210067373 Cidon et al. Mar 2021 A1
20210067374 Cidon et al. Mar 2021 A1
20210067375 Cidon et al. Mar 2021 A1
20210067407 Cidon et al. Mar 2021 A1
20210067427 Cidon et al. Mar 2021 A1
20210067461 Cidon et al. Mar 2021 A1
20210067464 Cidon et al. Mar 2021 A1
20210067467 Cidon et al. Mar 2021 A1
20210067468 Cidon et al. Mar 2021 A1
Foreign Referenced Citations (11)
Number Date Country
1912381 Apr 2008 EP
3041178 Jul 2016 EP
3509256 Jul 2019 EP
03073701 Sep 2003 WO
2012167184 Dec 2012 WO
2017083975 May 2017 WO
2019070611 Apr 2019 WO
2019094522 May 2019 WO
2020018704 Jan 2020 WO
2020101922 May 2020 WO
2021040934 Mar 2021 WO
Non-Patent Literature Citations (35)
Entry
Mudigonda, Jayaram, et al., “NetLord: A Scalable Multi-Tenant Network Architecture for Virtualized Datacenters,” Proceedings of the ACM SIGCOMM 2011 Conference, Aug. 15-19, 2011, 12 pages, ACM, Toronto, Canada.
Non-Published Commonly Owned U.S. Appl. No. 16/662,363, filed Oct. 24, 2019, 129 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,379, filed Oct. 24, 2019, 123 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,402, filed Oct. 24, 2019, 128 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,427, filed Oct. 24, 2019, 165 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,489, filed Oct. 24, 2019, 165 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,510, filed Oct. 24, 2019, 165 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,531, filed Oct. 24, 2019, 135 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,570, filed Oct. 24, 2019, 141 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,587, filed Oct. 24, 2019, 145 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/662,591, filed Oct. 24, 2019, 130 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/721,964, filed Dec. 20, 2019, 39 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/721,965, filed Dec. 20, 2019, 39 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/792,908, filed Feb. 18, 2020, 48 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/792,909, filed Feb. 18, 2020, 49 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/851,294, filed Apr. 17, 2020, 59 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/851,301, filed Apr. 17, 2020, 59 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/851,308, filed Apr. 17, 2020, 59 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/851,314, filed Apr. 17, 2020, 59 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/851,323, filed Apr. 17, 2020, 59 pages, VMware, Inc.
Non-Published Commonly Owned U.S. Appl. No. 16/851,397, filed Apr. 17, 2020, 59 pages, VMware, Inc.
Petition for Post-Grant Review of U.S. Pat. No. 9,722,815, filed May 1, 2018, 106 pages.
Del Piccolo, Valentin, et al., “A Survey of Network Isolation Solutions for Multi-Tenant Data Centers,” IEEE Communications Society, Apr. 20, 2016, vol. 18, No. 4, 37 pages, IEEE.
Fortz, Bernard, et al., “Internet Traffic Engineering by Optimizing OSPF Weights,” Proceedings IEEE INFOCOM 2000, Conference on Computer Communications, Nineteenth Annual Joint Conference of the IEEE Computer and Communications Societies, Mar. 26-30, 2000, 11 pages, IEEE, Tel Aviv, Israel.
Francois, Frederic, et al., “Optimizing Secure SDN-enabled Inter-Data Centre Overlay Networks through Cognitive Routing,” 2016 IEEE 24th International Symposium on Modeling, Analysis and Simulation of Computer and Telecommunication Systems (MASCOTS), Sep. 19-21, 2016, 10 pages, IEEE, London, UK.
Michael, Nithin, et al., “HALO: Hop-by-Hop Adaptive Link-State Optimal Routing,” IEEE/ACM Transactions on Networking, Dec. 2015, 14 pages, vol. 23, No. 6, IEEE.
Mishra, Mayank, et al., “Managing Network Reservation for Tenants in Oversubscribed Clouds,” 2013 IEEE 21st International Symposium on Modelling, Analysis and Simulation of Computer and Telecommunication Systems, Aug. 14-16, 2013, 10 pages, IEEE, San Francisco, CA, USA.
Non-Published Commonly Owned U.S. Appl. No. 17/068,603, filed Oct. 12, 2020, 37 pages, Nicira, Inc.
Ray, Saikat, et al., “Always Acyclic Distributed Path Computation,” University of Pennsylvania Department of Electrical and Systems Engineering Technical Report, May 2008, 16 pages, University of Pennsylvania ScholarlyCommons.
Webb, Kevin C., et al., “Blender: Upgrading Tenant-Based Data Center Networking,” 2014 ACM/IEEE Symposium on Architectures for Networking and Communications Systems (ANCS), Oct. 20-21, 2014, 11 pages, IEEE, Marina del Rey, CA, USA.
Yap, Kok-Kiong, et al., “Taking the Edge off with Espresso: Scale, Reliability and Programmability for Global Internet Peering,” SIGCOMM '17: Proceedings of the Conference of the ACM Special Interest Group on Data Communication, Aug. 21-25, 2017, 14 pages, Los Angeles, CA.
Huang, Cancan, et al., “Modification of Q.SD-WAN,” Rapporteur Group Meeting—Doc, Study Period 2017-2020, Q4/11-DOC1 (190410), Study Group 11, Apr. 10, 2019, 19 pages, International Telecommunication Union, Geneva, Switzerland.
Sarhan, Soliman Abd Elmonsef, et al., “Data Inspection in SDN Network,” 2018 13th International Conference on Computer Engineering and Systems (ICCES), Dec. 18-19, 2018, 6 pages, IEEE, Cairo, Egypt.
Xie, Junfeng, et al., “A Survey of Machine Learning Techniques Applied to Software Defined Networking (SDN): Research Issues and Challenges,” IEEE Communications Surveys & Tutorials, Aug. 23, 2018, 38 pages, vol. 21, Issue 1, IEEE.
Lin, Weidong, et al., “Using Path Label Routing in Wide Area Software-Defined Networks with OpenFlow,” 2016 International Conference on Networking and Network Application, Jul. 2016, 6 pages, IEEE.
Related Publications (1)
Number Date Country
20200366530 A1 Nov 2020 US
Provisional Applications (1)
Number Date Country
61844822 Jul 2013 US
Continuations (1)
Number Date Country
Parent 15221608 Jul 2016 US
Child 16945700 US
Continuation in Parts (1)
Number Date Country
Parent 14321818 Jul 2014 US
Child 15221608 US