The present technology pertains to load balancing, and more specifically to load balancing using segment routing and real-time application monitoring.
The ubiquity of Internet-enabled devices has created an enormous demand for Internet services and content. In many ways, we have become a connected society where users are increasingly reliant on network services and content. This Internet-connected revolution has created significant challenges for service and content providers who often struggle to service a high volume of user requests without falling short of user performance expectations. For example, providers typically need large and complex datacenters to keep up with network and content demands from users. These datacenters are generally equipped with server farms configured to host specific services, and include numerous switches and routers configured to route traffic in and out of the datacenters. In many instances, a specific datacenter is expected to handle millions of traffic flows and service requests.
Not surprisingly, such large volumes of data can be difficult to manage and create significant performance degradations and challenges. Load balancing solutions may be implemented to improve performance and service reliability in a datacenter. However, current load balancing solutions are prone to node failures and lack adequate bi-directional flow stickiness.
In order to describe the manner in which the above-recited and other advantages and features of the disclosure can be obtained, a more particular description of the principles briefly described above will be rendered by reference to specific embodiments thereof which are illustrated in the appended drawings. Understanding that these drawings depict only exemplary embodiments of the disclosure and are not therefore to be considered to be limiting of its scope, the principles herein are described and explained with additional specificity and detail through the use of the accompanying drawings in which:
Various embodiments of the disclosure are discussed in detail below. While specific implementations are discussed, it should be understood that this is done for illustration purposes only. A person skilled in the relevant art will recognize that other components and configurations may be used without departing from the spirit and scope of the disclosure.
Additional features and advantages of the disclosure will be set forth in the description which follows, and in part will be obvious from the description, or can be learned by practice of the herein disclosed principles. The features and advantages of the disclosure can be realized and obtained by means of the instruments and combinations particularly pointed out in the appended claims. These and other features of the disclosure will become more fully apparent from the following description and appended claims, or can be learned by the practice of the principles set forth herein.
Disclosed herein are systems, methods, and computer-readable media for load balancing using segment routing and real-time application monitoring. In some examples, a method can involve receiving a packet including a request from a source device to an application associated with a virtual address in a network, and mapping the request to a set of candidate servers hosting the application associated with the virtual address. For example, the packet associated with the request can be hashed and the hash value used to identify a particular hash bucket corresponding to that packet. The hash bucket can include a segment routing policy identifying multiple candidate servers for the packet. The set of candidate servers can thus be identified from the segment routing policy in the particular hash bucket corresponding to the packet.
The method can further involve encoding the set of candidate servers as a list of segments in a segment routing header associated with the packet. For example, a segment routing header can be inserted into the packet. The segment routing header can identify a list of segments which can include the set of candidate servers identified for the packet. The list of segments in the segment routing header can enable the packet to be successively routed through the set of candidate servers, to allow each receiving server to make a local load-balancing decision to accept or reject the request associated with the packet. The list of segments can also include one or more segment routing functions for successively steering the packet through the set of candidate servers until one of the set of candidate servers accepts the request. The segment routing functions can provide instructions to a receiving node, identifying a particular action to be taken by that node upon receipt of the packet.
The method can also involve determining that a first candidate server from the set of candidate servers is a next segment in the list of segments, encoding the first candidate server in a destination address field of an IPv6 header of the packet, and forwarding the packet to the first candidate server. The destination address field can identify the next routing segment for the packet, so the packet is routed to that segment. When the first candidate server receives the packet, it can make a load-balancing decision to accept or deny the request associated with the packet. If the first candidate server denies the request, it can forward the packet to the next candidate server, which can be identified from the list of segments in the segment routing header. The packet can be routed through the candidate servers until a candidate server accepts the request. When a server accepts the request, it can forward the packet to the application on that server and send a reply indicating that the server has accepted the request. The reply can be transmitted to a load balancer in the network which initially routed the packet to the set of candidate servers. The reply can include a segment routing header and segment routing instructions to indicate that the server has accepted the connection, and allow the load balancer to create a sticky entry for that flow which identifies the accepting server as the server associated with the flow. This can ensure bi-directional stickiness for the flow.
The disclosed technology addresses the need in the art for accurate and efficient application-aware load balancing. The present technology involves systems, methods, and computer-readable media for application-aware load balancing using segment routing and application monitoring. The present technology is described in the following disclosure as follows. The discussion begins with an introductory overview of application-aware load balancing using segment routing and Internet Protocol version 6 (IPv6). A description of an example computing environment, as illustrated in the accompanying drawings, will then follow.
The approaches herein can utilize segment routing (SR) to steer connection or communication requests towards multiple servers selected by a load balancer to service the requests. Each of the selected servers can receive a request and either accept or deny it based on one or more factors, such as current and future loads, server capabilities, resource availability, etc. A request will traverse the servers identified in the SR packet or header until one of the servers accepts the request. The load-balancing approaches herein can implement IPv6 and SR, which are further described below, to steer requests efficiently while limiting state information and avoiding sequencing or ordering errors in connection-oriented communications, for example.
Load balancers can pseudo-randomly generate different segment routing lists that are used between load balancers (LBs) and segment routing (SR) nodes. SR nodes (e.g., application servers (ASs)) can accept or reject incoming connections based on the actual application server load as well as the expected load to serve the request. Stickiness at load balancers can be obtained by modifying the message sent by application servers accepting a new connection toward the LBs, such that any further packet from the same connection is sent using a segment routing list including the application server's ‘accepted connection address’.
For example, a flow can be hashed to multiple servers. The use of multiple servers can improve reliability and load-balancing fairness. The load balancer can receive the flow and forward the flow to the first server. The first server can decide whether to accept the connection based on one or more factors, such as current load, future load, predicted load, the flow, computing resources, etc. The second server can serve as backup in case the first server is not able to accept the connection.
To illustrate, IPv6 SR can select the correct server out of 2 servers, e.g., using IPv6 SR END.S SID (i.e., "SR hunting"). The TCP SYN packet in an SR packet can include the 2 servers as segment identifiers (SIDs). For example, the IPv6 header can include SA (Source Address)=C::, DA (Destination Address)=S1 (Server 1); and an SR Header can include (VIP, S2, S1) SL=2, where VIP is the virtual address of an application in the request, S2 and S1 are candidate servers hosting the application associated with that virtual address, and SL (Segments Left) indicates the number of remaining segments to be processed.
Hash buckets can be generated and mapped to multiple, respective servers. In this example, each hash bucket can be bound to an SR policy identifying two candidate servers. The packet can be hashed to identify a hash bucket corresponding to that packet, and the SR policy of that hash bucket can be used to identify the candidate servers for the packet. The candidate servers can be included in a list of segments within an SR header or packet, which can be used to steer the packet to the candidate servers. The candidate servers can decide whether to accept or reject the connection when they receive the packet as it is forwarded through the different candidate servers. For example, based on the application state of the first server in the SR policy, the first server either forwards the packet to the second server or to the first server's local application. In this way, segment routing and IPv6 can be implemented for intelligent, application-aware load balancing. IPv6 and segment routing are further described below.
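As a rough illustration of the example above, the following Python sketch models the hash-bucket lookup and the resulting SR-steered packet with plain data structures. The field names, the bucket contents, and the hash over the flow's 5-tuple are illustrative assumptions rather than a definitive wire format or implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class SRHeader:
    segments: List[str]      # segment list, e.g. [VIP, S2, S1]
    segments_left: int       # SL counter

@dataclass
class IPv6Header:
    src: str                 # SA, e.g. the client C::
    dst: str                 # DA, rewritten to the next segment at each SR hop

@dataclass
class Packet:
    ipv6: IPv6Header
    srh: SRHeader

# Hash buckets, each bound to an SR policy of two candidate servers (illustrative).
BUCKETS: List[Tuple[str, str]] = [("S1::", "S2::"), ("S3::", "S4::")]

def steer_request(five_tuple: Tuple, vip: str, client: str) -> Packet:
    """Map a flow to a bucket, build the SR list, and address the packet to the first candidate."""
    s1, s2 = BUCKETS[hash(five_tuple) % len(BUCKETS)]    # SR policy for this flow
    srh = SRHeader(segments=[vip, s2, s1], segments_left=2)
    ipv6 = IPv6Header(src=client, dst=s1)                # DA = first candidate server
    return Packet(ipv6=ipv6, srh=srh)

if __name__ == "__main__":
    print(steer_request(("C::", 51342, "VIP::1", 80, "tcp"), vip="VIP::1", client="C::"))
```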
IPv6 Environment
In an IPv6 environment, such as an IPv6-centric data center, servers can be reached via an IPv6 physical prefix. The servers can also run application services in isolated environments, such as virtual machines (VMs) or software containers, which can be assigned an IPv6 virtual address (VIP). In some cases, a virtual switch (e.g., Open vSwitch, vector packet processing, etc.) can be deployed on a server to route packets between physical and virtual interfaces on the server. This allows the network (e.g., data center) to be fully Layer-3 routed, without having to deploy Layer-2 tunnels such as VLANs or VXLANs.
Routing the VIPs corresponding to the different applications running in the data center can be achieved in several manners. In some examples, the virtual switches can run Interior Gateway Protocol (IGP) to propagate direct routes to the VIPs. Other examples may use a mobility protocol, such as Identifier-Locator Addressing for IPv6, wherein edge routers perform the translation between physical and virtual addresses. As will be further explained below, the approaches herein implement segment routing to steer packets through a predetermined path including multiple candidate servers for load balancing.
Segment Routing (SR)
SR is a source-routing paradigm, initially designed for traffic engineering, which allows for a packet to follow a predefined path, defined by a list of segments, inside an SR domain. The approaches herein leverage the SR architecture and IPv6 connectivity for accurate and efficient application-aware load balancing.
SR and IPv6 can be leveraged together by implementing an IPv6 header in an SR packet or an SR header in an IPv6 packet. For example, in some cases, an IPv6 extension header can be implemented to identify a list of segments for SR and a counter SegmentsLeft, indicating the number of remaining segments to be processed until the final destination of the packet is reached. In an SR packet, the IPv6 destination address can be overwritten with the address of the next segment. This way, the packet can go through SR-unaware routers until reaching the next intended SR hop. Upon receipt of an SR packet, an SR-aware router will set the destination address to the address of the next segment, and decrease the SegmentsLeft counter. When the packet reaches the last SR hop, the final destination of the packet is copied to the IPv6 destination address field. Depending on the value of a flag in the header, the SR header can be stripped by the last SR hop so that the destination receives a vanilla IPv6 packet.
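The per-hop behavior described above can be sketched as follows, reusing the simplified Packet, IPv6Header, and SRHeader objects from the earlier sketch; a real SR-aware router would operate on the binary IPv6 routing extension header, and the flag controlling SR-header removal is only assumed here.

```python
def process_sr_packet(pkt: "Packet", strip_at_last_hop: bool = True) -> "Packet":
    """SR-aware hop: decrement SegmentsLeft and rewrite the IPv6 destination address.

    The segment list is stored in reverse order, so index SegmentsLeft is the
    next segment; once SegmentsLeft reaches zero the final destination has been
    copied into the destination address and the SR header may be stripped.
    """
    if pkt.srh is None or pkt.srh.segments_left == 0:
        return pkt                                    # nothing left to process
    pkt.srh.segments_left -= 1
    pkt.ipv6.dst = pkt.srh.segments[pkt.srh.segments_left]
    if pkt.srh.segments_left == 0 and strip_at_last_hop:
        pkt.srh = None                                # deliver a vanilla IPv6 packet
    return pkt
```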
To perform application-aware load balancing with minimal overhead, the network can decide to which application a request should be assigned, without requiring out-of-band centralized monitoring. We introduce a concept, referred to herein as "service hunting" and further described below, that uses the SR architecture for application-aware load balancing.
To illustrate, assume that an application is running in several different candidate physical servers and the same VIP is used for all the application replicas. Moreover, assume that a load-balancing device resides at the edge of the data center or network, and traffic to the VIP is routed towards this load-balancing device. When the load-balancing device receives a connection request for the VIP, the load-balancing device can select a subset of the candidate servers running the application, and insert an SR header with the physical addresses of the candidate servers. This will steer the connection request packet successively through the candidate servers.
When the request reaches one of the candidate servers, rather than simply forwarding the packet to the next server in the list, the virtual switch on the candidate server can bypass the rest of the SR list and deliver the packet to the virtual interface corresponding to the server's instance of the application. The server can then locally decide whether to accept the connection or reject the connection and forward the request to the next candidate in the SR list. In some cases, the server can make such decisions based on a policy shared between the virtual switch and the application. If the server rejects the connection, it can forward the request to the next segment in the SR list, and the packet can traverse the servers in the SR list until a candidate server accepts the connection or the packet reaches the last segment in the SR list. To ensure all requests are satisfied, the last server in the list may not be allowed to refuse the connection and instead forced to accept the connection. Upon accepting a connection, the accepting server can signal to the load-balancer that the accepting server has accepted the connection, to ensure that further packets corresponding to this flow can be directly steered to the accepting server without traversing the load-balancing device and/or additional candidate servers.
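The server-side "service hunting" decision described above might be organized roughly as in the sketch below, again building on the earlier helpers; the load probe (can_accept), the delivery hook, and the load-balancer notification are hypothetical names used only for illustration.

```python
def handle_connection_request(pkt, local_app, notify_load_balancer, transmit):
    """Virtual-switch logic on a candidate server (illustrative sketch).

    Accept the request locally when the application can take it; otherwise
    forward the packet to the next candidate in the SR list. The last
    candidate is not permitted to refuse, so every request is served.
    """
    is_last_candidate = pkt.srh.segments_left <= 1     # only the final destination remains after us
    if is_last_candidate or local_app.can_accept():
        pkt.ipv6.dst = pkt.srh.segments[0]             # deliver to the local application instance (VIP)
        local_app.deliver(pkt)
        notify_load_balancer(pkt)                      # lets the load balancer create a sticky entry
    else:
        transmit(process_sr_packet(pkt, strip_at_last_hop=False))  # pass to the next candidate server
```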
This mechanism allows connection requests to be transparently delivered to several candidate servers, until finding a candidate server that is available to accept the connection. The decision to accept or reject a connection can be made locally by the individual server, in a decentralized fashion. This mechanism brings application-awareness directly to the network, and improves the load balancing across the data center or network, without requiring a centralized application monitoring system.
The application-aware load balancing approach herein can implement forwarder-side functionalities and/or server-side functionalities. For example, a forwarder service (e.g., load balancer, forwarder module, etc.) can dispatch connection requests and subsequent packets to specific servers, and the candidate servers can run a service or logic associated with the server's virtual switch, which can couple with the application in order to perform service hunting services.
The forwarder can be horizontally scaled into any number of instances. Routers at the edge of the data center or network can route traffic destined to an applications' VIPs to a forwarder. If several forwarders are deployed in the data center or network, routing protocols such as equal-cost multi-path routing (ECMP), can be used to evenly distribute the traffic among forwarders. Consistent hashing can be used by the forwarders to select candidate application servers for each flow, and perform service hunting on the selected candidate application servers.
The client 102 can connect with routers 106-1 through 106-N (collectively “106” hereinafter) in the data center 120 via a network 104. The client 102 can be any computing device, such as a laptop, a desktop, a tablet computer, a mobile phone, a server, a smart device (e.g., smart television, smart watch, etc.), an internet of things (IoT) device, a remote network or data center, etc. The network 104 can include any number or type of networks, such as a private network (e.g., local area network), a public network (e.g., the Internet), a hybrid network (e.g., virtual private network), a cloud network, etc.
The routers 106 can serve as edge devices in the data center 120, and route traffic to and from the data center 120. Thus, the routers 106 can connect the data center 120 with network 104, client 102, and any other external networks or devices. The routers 106 can serve as the egress and ingress point for the data center 120. The routers 106 can also route traffic internally within the data center 120 to other routers or switches, network devices or services (e.g., appliances, firewalls, load balancers, etc.), and application servers 110-1 through 110-N (collectively “110” hereinafter) in the data center 120.
The application servers 110 can include physical machines or resources hosting applications, isolated environments, or services in the data center 120. For example, the application servers 110 can be physical servers running various applications in the data center 120. The application servers 110 can run some or all of their applications in isolated environments, such as VMs or software containers. In some cases, an application can be hosted by, and/or run on, multiple application servers 110 in the data center 120. For example, multiple application servers 110 can run instances of an application (e.g., virtual instances, replicas, parallel instances, mirror instances, etc.). To illustrate, an application can run on multiple application servers 110, to allow the multiple application servers 110 to load balance application traffic, and/or provide redundancy (e.g., backup or standby), fault-tolerance, high-availability, scalability, etc., for the application. The multiple application servers 110 can run the full application or instance of the application, or a portion of the application, such as a function in a service chain configuration.
The application servers 110 can include a physical network interface (e.g., NIC) to communicate with other devices or services (e.g., devices or services in the network environment 100). The physical network interface can be assigned a physical prefix or network address for such communications. The application servers 110 can also include one or more virtual interfaces (e.g., vNICs) which can provide virtualized or abstract representations of network interfaces and connections. Virtual interfaces can provide added flexibility and network capabilities, as well as various other benefits or services, such as aggregation of links or data, isolation of data or networks, decoupling of application and system traffic, expansion of network interfaces, network redundancy, dedicated links, and so forth. Virtual interfaces can be assigned virtual addresses (e.g., VIPs) in the data center 120. The virtual addresses can identify the virtual interfaces as well as any applications or isolated environments associated with the virtual addresses on the application servers 110.
For example, an application can be assigned a virtual address in the data center 120, which can be used to identify the application in the data center 120 and route traffic to and from the application. The virtual address can be used to steer traffic to and from a virtual instance of the application running on one or more of the application servers 110. In some cases, the virtual address can be mapped to the same application on multiple application servers 110, and can be used to communicate with an instance of the application on any of the multiple application servers 110.
The application servers 110 can include a virtual switch, such as OVS or VPP, which can route traffic to and from the application servers 110. For example, a virtual switch can route traffic between physical and virtual network interfaces on an application server, between applications and/or isolated environments on the application server, and between the application server and devices or applications outside of the application server. To illustrate, an application server can run multiple workloads (e.g., applications in different VMs or containers) assigned to different virtual interfaces and virtual addresses. A virtual switch on the application server can route traffic to and from the different workloads by translating the virtual addresses of the workloads and communicating with the virtual interfaces as well as other network interfaces such as the physical network interface(s) on the application server.
The data center 120 can also include load balancers 108-1 through 108-N (collectively “108” hereinafter). The load balancers 108 can communicate traffic between the routers 106 and the application servers 110. Moreover, the load balancers 108 can provide load balancing and forwarding services for traffic associated with the application servers 110. The load balancers 108 can select application servers 110 for a given flow to distribute flows and loads between the application servers 110 and steer traffic accordingly. The load balancers 108 can provide forwarding services using one or more server selection policies, including service hunting and/or consistent hashing, as further described below.
Load balancer 108-N then adds the set of candidate servers 204 to an SR list included in an SR header, as further described below, and forwards the flow with the SR header to the first segment in the SR header (e.g., application server 110-4). The SR header steers the flow using segment routing and IPv6 through the path 202 until application server 110-3 or application server 110-4 accepts the flow. In this example, the SR header will steer the flow first through application server 110-4, which will make a determination to accept or reject the flow based on one or more factors, such as an acceptance policy, a current workload, a capacity, a projected workload, etc. If application server 110-4 accepts the flow, it will establish a connection with client 102 and process the flow. The application server 110-4 can inform the load balancer 108-N that it has accepted the flow and will be able to communicate directly with client 102 until the connection is terminated.
On the other hand, if the application server 110-4 rejects the flow, it then forwards the flow to the next segment listed in the SR header, which in this example is application server 110-3. The SR header will therefore steer the flow through the path 202 from application server 110-4 to application server 110-3. Since application server 110-3 is the last segment in the SR list, given that the set of candidate servers 204 in this example only includes application server 110-4 and application server 110-3, the application server 110-3 may be forced to accept the flow to ensure the flow is accepted and processed. However, if application server 110-3 were not the last segment, it could decide whether to accept or reject the flow, just as application server 110-4 did.
To identify the set of candidate servers 204 for the flow and generate the SR list for the SR header used to steer the flow towards the set of candidate servers 204 for load balancing, the load balancer 108-N can implement a server selection policy, referenced herein as service hunting 206, as well as a hashing mechanism, referenced herein as consistent hashing.
Service Hunting
Service hunting 206 allows the set of candidate servers 204 to be selected from the application servers 110 in an application-aware fashion. Service hunting 206 allows the load balancers 108 to select multiple candidate servers for a given flow or connection, while maintaining a low overhead. The load balancers 108 can build an SR list with two or more random servers from the application servers 110 and/or a larger set of the application servers 110 running a particular application to be load balanced. For example, load balancer 108-N can build an SR list including the set of candidate servers 204 for the flow from client 102.
The load balancer 108-N can use a random or pseudo-random hashing function to map the flow (e.g., a transport control protocol (TCP) flow) identified by the flow's 5-tuple, to a list of physical prefixes corresponding to the set of candidate servers 204 hosting the application associated with the flow. The flow can then be assigned to the set of candidate servers 204 associated with the list of physical prefixes mapped to the flow. This assignment of multiple candidate servers to a flow can improve overall load repartitioning.
In some cases, a specific one of the set of candidate servers 204 assigned to the flow can be selected as the first or primary candidate server, and the other candidate server(s) can be selected to serve as backup or secondary candidate servers for load balancing. In other cases, the set of candidate servers 204 can be ordered, ranked, or prioritized for the flow. In still other cases, the set of candidate servers 204 may be randomly ordered or sequenced, or simply selected with no particular ordering or sequencing defined for them.
Consistent Hashing
Consistent hashing can allow the load balancers 108 to dynamically scale the number of instances of applications or servers to meet dynamic load requirements in the data center 120. When an application or server instance is added or removed in the data center 120, an ECMP function can be performed by the routers 106 at the edge of the data center 120 to rebalance existing flows between the load balancers 108. Consistent hashing can ensure that the mapping from flows to SR lists of candidate servers is consistent across the load balancers 108. Consistent hashing also provides mapping that is resilient to modifications to the set of candidate servers 204, which ensures that adding or removing an application server has a minimal impact on the mapping of previously existing flows.
With consistent hashing, each flow can be mapped to C servers where C is greater than one. Consistent hashing can be used to produce an SR list of candidate servers of any size, which can vary in different examples. For clarity and explanation purposes, we will use C=2 in our examples herein, which yields an SR list of two candidate servers. Below is an example algorithm for consistent hashing:
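The original algorithm listing is not reproduced here. The following Python sketch shows one way such a consistent-hashing lookup table could be built, consistent with the procedure described below (a pseudo-random permutation of the buckets per server, with servers claiming buckets in round-robin fashion until every bucket holds C servers); the names, seed, and parameters are illustrative assumptions.

```python
import random
from typing import List

def build_lookup_table(n_servers: int, n_buckets: int, c: int = 2,
                       seed: int = 0) -> List[List[int]]:
    """Assign C candidate servers to each of M buckets (illustrative sketch).

    Each server owns a pseudo-random permutation of the buckets and, in a
    round-robin pass over the servers, claims the next bucket in its
    permutation that does not yet hold C servers. Requires n_servers >= c.
    """
    rng = random.Random(seed)
    perms = [rng.sample(range(n_buckets), n_buckets) for _ in range(n_servers)]  # p[i]
    cursors = [0] * n_servers                    # next index to try in each permutation
    table: List[List[int]] = [[] for _ in range(n_buckets)]
    assigned, turn = 0, 0
    while assigned < n_buckets * c:
        i = turn % n_servers
        # skip buckets in this server's permutation that are already full
        while cursors[i] < n_buckets and len(table[perms[i][cursors[i]]]) >= c:
            cursors[i] += 1
        if cursors[i] < n_buckets:
            table[perms[i][cursors[i]]].append(i)
            cursors[i] += 1
            assigned += 1
        turn += 1
    return table

if __name__ == "__main__":
    for bucket, servers in enumerate(build_lookup_table(n_servers=3, n_buckets=7)):
        print(f"bucket {bucket}: servers {servers}")
```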
The lookup table 302 contains buckets 304 containing SR policies 306. The SR policies 306 identify a set of two or more candidate servers assigned to the respective buckets 304. In this example, the lookup table 302 maps application servers 110-1, 110-2, and 110-3 to buckets 0-6, with two application servers being mapped to each bucket. The SR policies 306 in the lookup table 302 identify the specific application servers mapped to a bucket. In some examples, the SR policies 306 can identify the application servers assigned to each bucket based on the network address of each application server. For example, the SR policies 306 can include the physical network prefix or address of each application server, to identify the application servers by their physical network prefix or address.
To perform a lookup in the lookup table 302 for flow 310, the load balancer 108-1 can hash the flow (e.g., hash the N-tuple of the flow) and map the hashed flow to a particular bucket. In this example, the hash of the flow 310 yields a match 308 with bucket 3, which contains application server 110-1 (S1) and application server 110-2 (S2). Thus, the load balancer 108-1 can map the flow 310 to bucket 3 and assign application server 110-1 (S1) and application server 110-2 (S2) from bucket 3 to the flow 310. The load balancer 108-1 can then use the information from the match 308, which maps the flow 310 to the application server 110-1 (S1) and application server 110-2 (S2), to generate the SR list for the flow 310. The SR list will contain the application server 110-1 (S1) and application server 110-2 (S2) from bucket 3, based on the match 308 determined from the hash of the flow 310.
Lookup Tables
Lookup tables can be generated to identify SR lists of candidate servers using consistent hashing. The lookup tables can include hash buckets which can be mapped to a set of candidate servers selected for each particular hash bucket. An example lookup table can be generated as follows.
Consider M buckets and N servers. For each server i ∈ {0, . . . , N−1}, a pseudo-random permutation p[i] of {0, . . . , M−1} is generated. These permutations can then be used to generate a lookup table t: {0, . . . , M−1}→{0, . . . , N−1}^C that maps each bucket to a list of C servers, following the procedure described above in Algorithm 1 for consistent hashing.
The table t can then be used to assign SR lists of application servers to flows: each network flow can be assigned an SR list by hashing the network flow (e.g., hashing the 5-tuple of the network flow) into a bucket j and taking the corresponding list t[j]. Hashing can be performed using a static hash function common to all load balancers 108, for example.
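Continuing the sketch above, a flow's 5-tuple could be mapped to its SR list as follows; a stable hash (here SHA-256, an illustrative choice) is used instead of a per-process hash so that every load balancer computes the same bucket for the same flow.

```python
import hashlib
from typing import List, Tuple

def flow_to_sr_list(five_tuple: Tuple, table: List[List[int]]) -> List[int]:
    """Hash a flow's 5-tuple into a bucket j and return the corresponding list t[j]."""
    key = repr(five_tuple).encode()
    j = int.from_bytes(hashlib.sha256(key).digest()[:8], "big") % len(table)
    return table[j]

# Example (addresses and ports are illustrative).
table = build_lookup_table(n_servers=3, n_buckets=7)
print(flow_to_sr_list(("2001:db8::c1", 51342, "2001:db8::a", 80, "tcp"), table))
```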
In some cases, the lookup table t can be generated by browsing through the application servers 110 in a circular fashion, making the application servers 110 successively “pick” buckets in their permutation until finding a bucket that has not yet been assigned C servers. Once each bucket has been assigned C servers, the algorithm can terminate.
Assume that flows are assigned to the first or second server in their SR lists with equal probability, and consider how flows mapped to a non-removed server (e.g., servers 2, 3, and 4 in this example) are affected by the table recomputation reflected in permutation table 322 and lookup table 332. For each bucket, we count one failure for each non-removed server appearing in the lookup table before recomputation, but not after recomputation.
The list of segments 414 in the SR header 406 can be used by nodes in an SR domain and/or SR-aware nodes to steer the packet 400 through the candidate servers 110-1 and 110-2 in the list of segments 414 and toward the application address 416. The candidate servers in the list of segments 414 can be identified using a lookup table, as previously explained. The segments field 412 can also include a counter 418, which can identify the segments left (i.e., SegmentsLeft).
The IPv6 header 404 can include a source address field 410 and a destination address field 408. The source address field 410 can identify the source of the packet 400, such as client 102. The source address field 410 can include a network address of the original source of the packet 400, a return destination for the packet 400, and/or a current source or sender of the packet 400. The source field 410 can also include commands or functions to be implemented by the node identified in the source field 410, as will be further described below.
The destination address field 408 can identify the next segment or node from the list of segments 414. In this example, the destination address field 408 identifies server 110-1 (S1) which is the first candidate server mapped to the packet 400. The destination address field 408 can be used to steer the packet 400 to the next destination. The destination field 408 in the IPv6 header 404 can allow the packet 400 to be routed even if the packet 400 traverses SR-unaware nodes.
The destination address field 408 can include a network prefix of the identified node or segment. For example, the destination address field 408 can include the physical prefix of server 110-1 (S1). This will ensure that the packet 400 is transmitted to the first candidate server, server 110-1 (S1), as the first destination server for the packet 400. The server 110-1 (S1) will have an opportunity to accept or reject the packet 400, as will be further described below. If the server 110-1 (S1) rejects the packet, the server 110-1 (S1) can then forward the packet 400 to the next segment in the list of segments 414, which in this example is server 110-2 (S2). When forwarding the packet, the server 110-1 (S1) can overwrite the destination address field 408 on the IPv6 header 404 to identify the server 110-2 (S2) as the destination, which ensures that the packet 400 is routed to the second candidate server, server 110-2 (S2). Server 110-2 (S2) will thereafter receive the packet 400 and determine whether to accept or reject (if permitted) the packet 400. This way, the list of segments 414 in the SR header 406 as well as the destination address field 408 in the IPv6 header 404 can be used to push the packet 400 to the set of candidate servers selected for that packet 400 and allow the set of candidate servers to make load-balancing decisions for the packet 400 as it traverses the list of segments 414.
As will be further explained, the list of segments 414 and/or destination address field 408 can include functions or commands (hereinafter “SR functions”) to be implemented by associated nodes or segments. For example, the destination address field 408 can identify server 110-1 (S1) and include a function to be applied by server 110-1 (S1), such as a connect function which server 110-1 (S1) can interpret as a request to connect with server 110-1 (S1). The destination address field 408 can thus contain the state of the packet 400, including the next destination of the packet, the source or return node, and any commands or functions for such nodes or segments.
Similarly, the list of segments 414 can include commands or functions for the segments in the list of segments 414. For example, the list of segments 414 can include a connect function for each of the candidate servers, a force connect function for the last segment in the list of segments 414, one or more parameters for one or more segments (e.g., resource identifier, flow identifier, etc.), state information (e.g., ACK, SYN, etc.), and so forth.
SR functions can encode actions to be taken by a node directly in the SR header 406 and/or the IPv6 header 404. In some examples, each node is assigned an entire IPv6 prefix. Accordingly, the lower-order bytes in the prefix can be used to designate different SR functions. In some cases, the SR functions may depend on the address of the first segment in the list of segments 414 (e.g., the “sender” of the function).
To illustrate, when a node whose physical prefix is s receives a packet with the SR header 406 containing (x, . . . , s::ƒ, . . . ), the SR header 406 will trigger node s to perform a function ƒ with argument x, denoted by s.f(x).
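As an illustration of this addressing convention, the sketch below encodes a function identifier in the low-order bits of a node's physical prefix, assuming for the sake of the example a /64 prefix per node; the exact bit layout and the function numbering are assumptions, not a format mandated by the text.

```python
import ipaddress

# Illustrative function identifiers carried in the low-order bits of a segment.
CONNECT, FORCE_CONNECT, CREATE_STICKINESS, ACK_STICKINESS, RECOVER, END = range(1, 7)

def encode_segment(node_prefix: str, function: int) -> str:
    """Build a segment of the form s::f from a node's /64 prefix (assumed layout)."""
    prefix = ipaddress.IPv6Network(node_prefix)
    return str(prefix.network_address + function)

def decode_segment(segment: str, node_prefix: str) -> int:
    """Recover the SR function f from a received destination address."""
    prefix = ipaddress.IPv6Network(node_prefix)
    return int(ipaddress.IPv6Address(segment)) - int(prefix.network_address)

seg = encode_segment("2001:db8:0:1::/64", CONNECT)      # e.g. 2001:db8:0:1::1
assert decode_segment(seg, "2001:db8:0:1::/64") == CONNECT
```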
The node prefix 426 can include the physical prefix of the next segment or server, as well as an application identifier. The SR function 428 can include state information associated with the node prefix 426. The third segment 424 can be further segmented into sub-segments 432, 434, which can include arguments for the SR function 428, such as CPU id, Flow id, etc. The arguments can be used for flow and resource (e.g., CPU) steering and load balancing.
A handshake process between the load balancers 108 and the servers 110 can be implemented to establish flow stickiness. Flow stickiness can be accomplished while avoiding external control traffic from being generated, minimizing deep packet inspection, and steering return traffic to avoid the load balancers 108.
External control traffic can be avoided and deep packet inspection minimized by using the SR header 406 as opposed to sending custom control packets. The SR functions 428 can be used to communicate between the load balancer or forwarder nodes and the server nodes.
The connection request 502 can include the source 504, which identifies the client 102, and the destination 506, which identifies the virtual address of the application 508. In this example, the connection request 502 is a TCP SYN packet for establishing a TCP handshake.
The load balancer 108-N can receive the connection request 502 and identify candidate servers 110-1 through 110-3 hosting application 508 at the virtual address by applying a hashing function to the virtual address and identifying the candidate servers 110-1 through 110-3 from a bucket that maps to the hashed virtual address. The identified candidate servers can be used to generate a list of segments for an SR header.
The load balancer 108-N will then forward the request 502 to the first candidate server 110-1 (S1). When the candidate server 110-1 (S1) receives the request, it will determine whether to accept or reject the request. If the server 110-1 (S1) is busy or unable to accept the request 502, it will forward the request 502 to the next segment in the list of segments from the SR header 510.
This modified SR header 510 will inform the load balancer 108-N that the server 110-2 (S2) has accepted the connection and will be the destination server for traffic in this connection between the client 102 and the application 508. Thus, upon receiving the packet 520, the load balancer 108-N can identify server 110-2 (S2) in the SR header 510 and create a stickiness entry 522 at the load balancer 108-N indicating that traffic associated with this connection will be handled by server 110-2 (S2). The stickiness entry 522 can prevent the load balancer 108-N from sending future packets associated with this connection session to other destination servers, and can also allow future traffic to flow from the client 102 to the server 110-2 (S2) without having to traverse the load balancer 108-N.
The packet 560 will again include an SR header 510 to steer the packet 560 from the server 110-2 (S2) to the load balancer 108-N and towards the client 102. The SR header 510 will identify the server 110-2 (S2) as the sending or return segment, the load balancer 108-N as the next segment, and the client 102 as the destination.
The load balancer 108-N can use the packet 560 to manage the stickiness entry 522 previously added. For example, the load balancer 108-N can identify the server 110-2 (S2) in the SR header 510 and determine that a stickiness entry 522 exists which maps the server 110-2 (S2) to this connection between client 102 and the application 508 through server 110-2 (S2). The load balancer 108-N can also recognize that the packet 560 includes a flag or instructions to terminate the session (e.g., FIN), and in response, remove the stickiness entry 522. Thus, the packet 560 will terminate the session between the client 102 and application 508 through server 110-2 (S2) and remove the associated stickiness entry 522 at the load balancer 108-N. As a result, future connection requests or packets from client 102 received by the load balancer 108-N may undergo the SR load balancing process as previously described.
The load balancer 108-N can receive the packet 604A and process the packet 604A using SR load balancing as further described herein. The load balancer 108-N can receive the packet 604A sent by the client 102 from an edge device, such as a router 106 described above.
The load balancer 108-N can receive the packet 604A and identify the virtual address 608 as the destination in the IPv6 header 404. The load balancer 108-N can use the virtual address 608 to generate a hashing entry 618. As previously explained, the hashing entry 618 can map to a bucket which contains an SR policy that identifies a set of candidate servers for flows that map to that particular bucket. In this example, the hashing entry 618 maps packet 604A to server 110-1 (S1) and server 110-2 (S2), both of which host application 508 at virtual address 608 and may be load balancing candidates for processing the packet 604A.
The load balancer 108-N will then modify the IPv6 header 404 on the packet 604A to yield header 404A, and will insert SR header 406A into the packet 604A. The SR header 406A can include the destination of the packet 604A, which in this example is the virtual address 608 of the application 508, a list of SR segments or segment list which includes the segments representing the particular SR routing path that the packet 604A should traverse, and the return or sender address, which refers to the address 610 of load balancer 108-N in this example. For load balancing purposes, the segments in the list of SR segments will include the set of candidate servers from the hashing entry 618; namely, server 110-2 (S2) and 110-1 (S1). This will allow the packet to traverse through each of the set of candidate servers, server 110-2 and server 110-1, until a server from the set of candidate servers accepts the connection or packet. As the packet traverses each of the set of candidate servers, the respective candidate server can make a load balancing determination whether to accept or reject the connection or flow, to ensure traffic is load balanced between candidate servers based on various performance conditions and considerations as explained herein.
The SR header 406A can include a respective function 614 and a respective argument 616 for any of the segments included in the SR header 406A. The function 614 can include an instruction, command, or operation (e.g., connect, force connect, listen, forward, continue, next, push, end, etc.), a flag or label (e.g., flag such as a TCP ACK flag), a parameter, state information, routing information, etc. The argument 616 can include an attribute, instruction, parameter, etc. For example, the argument 616 can include instructions or parameters which can be used by nodes to identify state information, or load balancing or steering information (e.g., information mapping a flow or packet to one or more resources). Non-limiting examples of function 614 can include a connect function, a force connect function, state information (e.g., ACK, SYN, SYN-ACK, FIN, etc.), an end or terminate function, etc. Additional non-limiting examples are further described below.
To illustrate, the SR header 406A can include a respective function 614 for server 110-2 (S2) and server 110-1 (S1). The function 614 for server 110-2 (S2) can instruct server 110-2 (S2) to take a particular action upon receipt of the packet, and the function 614 associated with server 110-1 (S1) can instruct server 110-1 (S1) to take a particular action upon receipt of the packet. The function 614 corresponding to the load balancer 108-N can also provide an instruction associated with load balancer 108-N (e.g., ACK, SYN, etc.), and the argument 616 can provide specific parameters associated with load balancer 108-N, such as steering, load balancing, or processing information for load balancer 108-N (e.g., a flow identifier and a resource identifier to map a flow to a resource on load balancer 108-N). The SR header 406A can also include other information, such as the number of segments in the list of segments (e.g., SL=2), which can correspond to the set of candidate servers in the list of segments.
The IPv6 header 404A is modified to steer the packet 604A towards the destination, virtual address 608. The source address field 410 on the IPv6 header 404A can identify the client 102 (e.g., the network address associated with client 102), but the destination address field 408 on the IPv6 header 404A will identify the address 602-1 of the first candidate server from the list of segments in the SR header 406, which in this example is server 110-1 (S1). The destination address field 408 can also include a function 614 for server 110-1 (S1), which in this example can be a connect function.
The load balancer 108-N sends the packet 604A with IPv6 header 404A and SR header 406A to network address 602-1 associated with server 110-1 (S1), which is the next segment in the list of segments and the first candidate server from the set of candidate servers. Server 110-1 (S1) can receive the packet 604A and make a local decision whether to accept the connection requested by the packet (e.g., SR function 614 for server 110-1 provided in the SR header 406A) or reject the connection. The server 110-1 (S1) can make the decision to accept or reject based on one or more factors, such as a current or future load at server 110-1 (S1), available resources at server 110-1 (S1), a status of application 508 on server 110-1 (S1), performance requirements associated with the request and/or application 508, real-time conditions and statistics, active sessions or requests associated with server 110-1 (S1), historical data, etc.
In this example, server 110-1 (S1) issues a reject decision 606A and thus refuses the connection request. Server 110-1 (S1) then identifies the next segment in the list of segments on the SR header 406A, which corresponds to the next candidate server, server 110-2 (S2), and forwards the packet 604A and request along the SR path (i.e., list of segments) to server 110-2 (S2). When forwarding the packet to server 110-2 (S2), server 110-1 (S1) modifies the IPv6 header 404A according to IPv6 header 404B, and may also modify the SR header 406A according to SR header 406B.
To generate the IPv6 header 404B, server 110-1 (S1) overwrites the address 602-1 of server 110-1 (S1) in the destination field 408 of the previous IPv6 header 404A, with the address 602-2 of server 110-2 (S2), which is the next segment or candidate server as determined based on the list of segments in the SR header 406A. Server 110-1 (S1) can also include an SR function 614 in the destination field 408 directing an action by server 110-2 (S2) upon receipt of the packet. In this example, the function 614 is a connect function, which requests a connection with server 110-2 (S2). In some cases, since server 110-2 (S2) in this example is the last candidate server, the function 614 can be a force connect function to force server 110-2 (S2) to accept the connection request in order to prevent the connection from being dropped or refused by all servers.
Server 110-2 (S2) receives the packet 604A from server 110-1 (S1) and makes a decision whether to accept the connection requested by the packet (e.g., SR function 614 for server 110-2 provided in the SR header 406) or reject the connection. The server 110-2 (S2) can make the decision to accept or reject based on one or more factors, as previously explained. In this example, server 110-2 (S2) is the last candidate server and, upon receiving the packet 604A, issues an accept decision 606B to accept the connection request in function 614 of the SR header 406B. In some cases, server 110-2 (S2) will forcefully accept the connection because it is the last candidate server and/or because the function 614 will include a force connect instruction.
Server 110-2 (S2) can strip the SR header 406B from the packet 604A and modify the IPv6 header 404B according to IPv6 header 404C, to remove the address 602-2 of server 110-2 (S2) from the destination field and instead identify the virtual address 608 of the application 508 as the destination for the packet 604A. Server 110-2 (S2) will forward the packet 604A with IPv6 header 404C to the application 508 at the virtual address 608 for processing.
In some cases, the server s_i (i ∈ {1, 2}) that has accepted the connection, namely server 110-2 (S2), can enter a STICKY_WAIT state for the flow. While in this state, the server 110-2 (S2) will steer traffic from the application 508 towards the load balancer 108-N at load balancer address 610, so the load balancer 108-N can learn which server has accepted the connection. For this, the server 110-2 (S2) can insert an SR header in the return packet and include an SR function 614 instructing load balancer 108-N to create a sticky entry, with an acknowledgment (ACK) 614 and parameters 616 (e.g., flow identifier and CPU identifier for steering the flows to a particular CPU) for flows associated with the application 508 and client 102. The SR header can include, for example, the segment list (s_i; lb::cs; c) in packets coming from the application 508, where cs stands for a createStickiness function and c is a connect function.
The packet 604B can include IPv6 header 404D with the virtual address 608 identified as the source and client 102 as the destination. Server 110-2 (S2) then modifies packet 604B to include SR header 406C and a modified IPv6 header 404E. The SR header 406C can identify the client 102 as the destination, load balancer 108-N as the next segment in the routing path, and server 110-2 (S2) as the source or return server or segment. The SR header 406C includes a function 614 (ACK) for load balancer 108-N and arguments 616 (e.g., CPU x and Flow y). The SR header 406C can also include function 614 (ACK) and arguments 616 (e.g., mapping the application 508 to CPU a and Flow b). The SR functions 614 and arguments 616 associated with load balancer 108-N and server 110-2 (S2) in the SR header 406C can be used by the load balancer 108-N to create a sticky entry for the connection associated with the SYN-ACK communication 620.
The modified IPv6 header 404E can identify the virtual address 608 in the source address field 410 and the load balancer 108-N (or LB address 610) in the destination address field 408, to steer the packet 604B to LB address 610 of load balancer 108-N. The destination address field 408 can include SR function 614 (ACK) and arguments 616 (e.g., mapping CPU x to Flow y at load balancer 108-N) to steer the packet 604B to load balancer 108-N according to the arguments 616.
Upon receiving packet 604B and modifying the packet 604B to include SR header 406C and IPv6 header 404E, the server 110-2 (S2) forwards packet 604B to LB address 610 of load balancer 108-N. When load balancer 108-N receives the packet 604B, it creates a sticky entry 626 which provides a STICKY_STEER state, mapping the flow associated with the SYN-ACK communication 620 (i.e., Flow b identified in SR arguments 616 of return function 614) to server 110-2 (S2) and return function 614 containing an ACK with parameters 616 (a and b, e.g., steering Flow b to CPU a). The load balancer 108-N strips the SR header 406C, and modifies the IPv6 header 404E in the packet according to IPv6 header 404F, which identifies virtual address 608 as the source and client 102 as the destination. The load balancer 108-N then forwards packet 604B to client 102.
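A rough sketch of how the load balancer's side of this handshake could maintain its stickiness state follows; the table layout, the flow key, the position of the return segment, and the helper names are assumptions made only for illustration.

```python
from typing import Dict, Tuple

# Sticky table: flow key -> (accepting server segment, (CPU id, flow id) arguments).
sticky_table: Dict[Tuple, Tuple[str, Tuple[int, int]]] = {}

def handle_server_reply(pkt, flow_key, client_addr):
    """Load-balancer handling of a return packet carrying a createStickiness/ACK function.

    Records which server accepted the connection so later packets of the flow
    can be steered straight to it, then forwards a plain IPv6 packet onward.
    """
    accepting_server = pkt.srh.segments[-1]        # return segment inserted by the server (assumed position)
    sticky_table[flow_key] = (accepting_server, extract_arguments(pkt))
    pkt.srh = None                                 # strip the SR header
    pkt.ipv6.dst = client_addr                     # continue toward the client
    return pkt

def extract_arguments(pkt) -> Tuple[int, int]:
    """Hypothetical helper returning the (CPU id, flow id) arguments carried with the SR function."""
    return (0, 0)
```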
Based on the sticky entry 626, subsequent traffic from the client 102 to the application 508 can be sent by the load balancer 108-N using an SR header that identifies the load balancer 108-N, server 110-2 (S2) as the server for that traffic, and virtual address 608 as the destination. For example, subsequent traffic from the client 102 can be transmitted to the application 508 using the SR header (LB; S2::as; VIP), where LB stands for LB address 610 of load balancer 108-N, S2 stands for address 602-2 of server 110-2 (S2), as stands for a function ackStickiness permitting both steering of the traffic directly to the server 110-2 (S2), and acknowledging the creation of a stickiness entry.
The SR header 406D includes the virtual address 608 of application 508 as the destination, address 602-2 for server 110-2 (S2) for steering traffic directly to server 110-2 (S2) with the corresponding SR function 614 and arguments 616 (e.g., ACK CPU a Flow b) for server 110-2 (S2), and LB address 610 of load balancer 108-N and the corresponding return function 614 and arguments 616 (e.g., END CPU x Flow y), asking server 110-2 (S2) to remove stickiness. Notably, in this example, because traffic is steered directly to server 110-2 (S2), the SR header 406D does not include additional candidate servers previously included in the list of segments (i.e., server 110-1), as the traffic no longer needs to traverse those servers.
The load balancer 108-N forwards packet 604C to server 110-2 (S2) based on the IPv6 header 404G which identifies server 110-2 (S2) in the destination address field 408 as the destination server for packet. In other words, the destination address field 408 in the IPv6 header 404G can route the packet 604C to the correct server destination, server 110-2 (S2). The source address field 410 and destination address field 408 can thus contain any necessary state for the traffic, thus limiting the need for nodes to store state information for the flow.
When server 110-2 (S2) receives the packet 604C, it can strip the SR header 406D and modify the IPv6 header according to IPv6 header 404C, and forward the packet 604C to the application 508 at virtual address 608. The server 110-2 (S2) can enter a STICKY_DIRECT state 636 for the flow. The STICKY_DIRECT state 636 can instruct the server 110-2 (S2) to send any subsequent traffic for this flow directly to the client 102, which can be done without using SR or including an SR header. In this state, traffic received from the load balancer 108-N (e.g., ACK packets from the client 102) can still be sent to the ackStickiness function of the correct server. The STICKY_DIRECT state 636 can also indicate that the server 110-2 (S2) should stop sending, to load balancer 108-N, ACK packets mapping CPU a and Flow b.
The load balancer 108-N then forwards the packet 604D to server 110-2 (S2). Server 110-2 (S2) receives the packet 604D at address 602-2 of server 110-2 (S2) and forwards the ACK and data packet 604D to virtual address 608 associated with application 508.
The client 102 and application 508 on server 110-2 (S2) can continue to communicate data in direct mode as described above.
In this example, the server 110-2 (S2) transmits a FIN packet 604F from application 508 to the client 102. The FIN packet 604F can be routed through the load balancer 108-N to notify the load balancer 108-N that the connection or session is being terminated. To route the FIN packet 604F through the load balancer 108-N, the server 110-2 (S2) modifies the IPv6 header 404D according to IPv6 header 404H, to include the LB address 610 of the load balancer 108-N in the destination address field 408 of the packet. The server 110-2 (S2) can also include a function 614 to end the connection or session and remove the sticky entry previously created for server 110-2 (S2) at the load balancer 108-N. In this example, the function 614 is an “END” function for terminating a TCP connection, and includes the arguments 616 “x” and “y” referring to CPU x and Flow y associated with the TCP connection at load balancer 108-N.
The server 110-2 (S2) also inserts an SR header 406E which includes a list of segments as well as any functions 614 and/or arguments 616. In this example, the SR header 406E includes client 102, load balancer 108-N, and server 110-2 (S2). Load balancer 108-N is included with the function 614 and arguments 616 END:x:y, as previously explained, for terminating the TCP connection. The server 110-2 (S2) is included with a function 614 and arguments 616 ACK:a:b, corresponding to the sticky entry for server 110-2 (S2) at the load balancer 108-N.
The load balancer 108-N receives the FIN packet 604F and forwards it to the client 102. In addition, the load balancer 108-N can remove the sticky entry for server 110-2 (S2) based on the function 614 and arguments 616 in the SR header 406E, or otherwise set a timer or condition for removal of the sticky entry. The TCP connection is terminated and future requests or flows from the client 102 to application 508 would go to the load balancer 108-N and may undergo the SR load balancing as previously described.
The SR functions 704, 706 can be encoded in the SR header of a packet to indicate an action to be taken by a node in the SR header. Moreover, the SR functions will depend on the address of the first segment in the SR list (e.g., the "sender" of the function). The SR functions can be denoted by s.ƒ(x), where s corresponds to the physical prefix of the node receiving the packet, ƒ corresponds to a function for the node, and x corresponds to an argument for the function. Thus, when a node with a physical prefix of s receives a packet with SR header (x, . . . , s::ƒ, . . . ), the node will perform a function ƒ with argument x, which is denoted by s.ƒ(x).
In some cases, SR functions can be implemented for failure recovery. For example, when adding or removing an instance of a load balancer, ECMP rebalancing may occur, and traffic corresponding to a given flow may be redirected to a different instance. The consistent hashing algorithm previously described can ensure that traffic corresponding to this flow is still mapped to the same SR list (e.g., s1, s2) as before the rebalancing. In order to reestablish stickiness to the correct server si (i ∈ {1, 2}), an SR header (lb, s1::r, s2::r, VIP) is added to incoming data packets corresponding to an unknown flow, where r is an SR function recoverStickiness, lb refers to the load balancer lb, VIP refers to the virtual address of an application, and s1 and s2 correspond to servers 1 and 2, respectively. When receiving a packet with this SR function, a server that had accepted the connection will re-enter the STICKY_STEER state, so as to notify the load balancer. Conversely, a server that had not accepted the connection will simply forward the packet to the next server in the SR list.
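A minimal sketch of how a server might handle the recoverStickiness function is shown below; the flow-table structure and return values are illustrative assumptions, while the STICKY_STEER behavior follows the description:

```python
# Hedged sketch: a server handling the recoverStickiness ("r") function after
# ECMP rebalancing. The state name follows the disclosure; the flow table and
# forwarding decision are illustrative assumptions.

STICKY_STEER = "STICKY_STEER"

def recover_stickiness(flow_table, flow_id, packet, sr_list, my_index):
    """If this server owns the connection, re-enter STICKY_STEER so the load
    balancer is notified again; otherwise pass the packet to the next server
    in the SR list."""
    if flow_id in flow_table:                 # this server had accepted the flow
        flow_table[flow_id] = STICKY_STEER    # re-advertise stickiness to the LB
        return ("deliver_locally", packet)
    next_segment = sr_list[my_index + 1]      # s2 or, ultimately, the VIP
    return ("forward", next_segment, packet)

# Usage: s1 (index 1) does not own flow 42, so it forwards toward s2.
print(recover_stickiness({}, 42, b"...", ["lb", "s1::r", "s2::r", "VIP"], 1))
```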
The state 808 can provide load information to the virtual switch or router 804 to help decide whether to accept or reject a connection request containing a connect function for the applications 508. When the virtual switch or router 804 receives a packet with a connect function for one of the applications 508, it can look at the respective state 808 of that application to determine whether to accept or deny the request based on the application's load, and either forward the request to the application or to the next server in the SR list included in the packet's SR header. Thus, the SR list in the SR header of the packet can identify multiple servers hosting an application identified in a request in order to load balance the request between the servers, and the servers can make local load balancing decisions as they receive the packet based on the state of the application. The virtual switch or router at each server can decide whether to forward the request to the application on the host server or to the next server on the SR list.
The individual servers or virtual switches or routers can implement an acceptance policy or algorithm to determine whether to accept or reject requests. An example algorithm is illustrated below:
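A minimal sketch of such an acceptance routine, assuming a connect-type check in which every candidate except the last may reject the request based on local load and the last candidate always accepts, is as follows; the helper names and data structures are illustrative assumptions:

```python
# Hedged sketch of a local acceptance routine. Assumptions: each candidate
# except the last may reject based on a local policy; the last candidate must
# accept so the request is never dropped. `policy_accepts` stands in for
# whatever acceptance policy (static or dynamic) is in force.

def handle_request(policy_accepts, app_state, sr_list, my_index):
    """Decide locally whether to accept a connection request or steer it to
    the next candidate server in the SR list."""
    is_last_candidate = my_index == len(sr_list) - 2  # last server before the VIP
    if is_last_candidate or policy_accepts(app_state):
        return ("accept", "forward request to local application")
    return ("forward", sr_list[my_index + 1])

# Example: s1 is fully loaded and rejects; s2, as the last candidate, accepts.
busy = {"busy_workers": 4, "total_workers": 4}
policy = lambda state: state["busy_workers"] < state["total_workers"]
print(handle_request(policy, busy, ["lb", "s1", "s2", "VIP"], 1))  # forwards to s2
print(handle_request(policy, busy, ["lb", "s1", "s2", "VIP"], 2))  # accepts
```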
Other algorithms are also contemplated herein. For example, some algorithms may provide static policies and other algorithms may provide dynamic policies, as further described below.
Static Policy
Let n be the number of the application's worker threads, and c a threshold parameter between 0 and n. In Algorithm 2, a policy denoted SR c is implemented, whereby the first server accepts the connection if and only if fewer than c worker threads are busy. When c=0, the requests are satisfied by the next servers in the SR list, and when c=n, the requests are satisfied by the first server.
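A minimal sketch of the static policy check follows; the worker-count bookkeeping is an assumption, but the rule matches the description, namely accept if and only if fewer than c worker threads are busy:

```python
# Minimal sketch of the static policy SR c: accept if and only if fewer than
# c of the n worker threads are busy. Worker bookkeeping is an illustrative
# assumption.

def static_policy_accepts(busy_workers, c):
    return busy_workers < c

# c = 0 rejects every request locally (next servers in the SR list serve them);
# c = n accepts every request locally (the first server serves them).
n = 8
print(static_policy_accepts(busy_workers=3, c=0))  # False
print(static_policy_accepts(busy_workers=3, c=n))  # True
```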
The choice of the parameter c has a direct influence on the behavior of the system. Small values of c will yield better results under light loads, and high ones will yield better results under heavy loads. If the load pattern is known by the operator, the parameter c can be manually selected so as to maximize the load-balancing efficiency. If this is not the case, a dynamic policy can be used in order to automatically tune the value of the parameter.
Dynamic Policy
The dynamic policy SR dyn can be used when the typical request load is unknown, for example. If the rejection ratio of the connectAvail function is 0, only the first server candidates in the SR lists will serve requests; if this ratio is 1, only the next candidate(s) will serve requests. Therefore, to maximize the utility of the system, a specific ratio, such as 1/2, can be set as a goal for the system. This can be done by maintaining at each server a window of the last connectAvail results, and dynamically adapting the value of c so that the observed ratio stays close to the target ratio. An example of this procedure is illustrated below in Algorithm 3.
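A minimal sketch of such an adaptation loop is shown below; the window length, the adjustment step for c, and the 1/2 target are illustrative assumptions consistent with the description:

```python
# Hedged sketch of the dynamic policy: keep a sliding window of recent
# connectAvail outcomes and nudge c so the observed rejection ratio stays
# near a target (1/2 here). Window length and step size are assumptions.

from collections import deque

class DynamicThreshold:
    def __init__(self, n_workers, target_ratio=0.5, window=128):
        self.n = n_workers
        self.c = n_workers // 2              # start in the middle of [0, n]
        self.target = target_ratio
        self.results = deque(maxlen=window)  # True = rejected, False = accepted

    def record(self, rejected):
        self.results.append(rejected)
        if len(self.results) < self.results.maxlen:
            return                           # wait until the window fills
        ratio = sum(self.results) / len(self.results)
        if ratio > self.target and self.c < self.n:
            self.c += 1                      # rejecting too often: loosen the policy
        elif ratio < self.target and self.c > 0:
            self.c -= 1                      # accepting too often: tighten the policy

    def accepts(self, busy_workers):
        rejected = busy_workers >= self.c
        self.record(rejected)
        return not rejected

policy = DynamicThreshold(n_workers=8)
print(policy.accepts(busy_workers=3))        # True: 3 < initial c of 4
```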
Having described example systems and concepts, the disclosure now turns to the method embodiment illustrated in
At step 902, a load balancer 108-1 receives a packet including a request from a source device 102 to an application associated with a virtual address. The application can be hosted by multiple application servers 110 in the network. The application servers 110 can host respective instances of the application at the virtual address. Each of the application servers 110 can have a physical network address or prefix identifying the server in the network.
At step 904, the load balancer 108-1 maps the request to a set of candidate servers 110 hosting the application associated with the virtual address. For example, the load balancer 108-1 can apply a consistent hashing mechanism to the request, as previously described, to identify a bucket for the request. The bucket can include an SR routing or load balancing policy which identifies multiple candidate servers 110 assigned to that bucket for load balancing requests that hash to that bucket. The load balancer 108-1 can identify the bucket for the request and determine which servers 110 the request should be mapped to based on the SR routing or load balancing policy corresponding to that bucket.
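A minimal sketch of this mapping step is shown below; the bucket count, the flow identifier used as hash input, and the example bucket policies are illustrative assumptions, and only the hash-to-bucket lookup is shown (not the construction of the consistent hashing table itself):

```python
# Hedged sketch of step 904: hash the flow identifier into a bucket and read
# that bucket's SR policy (an ordered list of candidate servers). The bucket
# count, the 5-tuple hash, and the example policies are assumptions.

import hashlib

NUM_BUCKETS = 8
# Each bucket carries an SR policy: the candidate servers for flows hashing there.
BUCKET_POLICIES = {i: [f"s{(i % 4) + 1}", f"s{((i + 1) % 4) + 1}"] for i in range(NUM_BUCKETS)}

def bucket_for(flow_tuple):
    """Deterministically map a flow (e.g., a 5-tuple) to a bucket index."""
    digest = hashlib.sha256(repr(flow_tuple).encode()).digest()
    return int.from_bytes(digest[:4], "big") % NUM_BUCKETS

flow = ("2001:db8::10", 51515, "vip::80", 80, "tcp")
candidates = BUCKET_POLICIES[bucket_for(flow)]
print(candidates)  # e.g., ['s3', 's4'] -- the candidate servers for this request
```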
At step 906, the load balancer 108-1 encodes the set of candidate servers as a list of segments in a segment routing header associated with the packet. The list of segments in the segment routing header can identify the sender of the packet, the set of candidate servers, and the destination (e.g., virtual address) of the packet. The list of segments can also include one or more segment routing functions for successively steering the packet through the set of candidate servers until one of the set of candidate servers accepts the request. The segment routing functions can be associated with specific segments in the list of segments, and can indicate a function to be performed by the associated segment.
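A minimal sketch of assembling such a segment list is shown below; the textual prefix::function encoding and the use of a connectAvail-style function on each candidate are illustrative assumptions:

```python
# Hedged sketch of step 906: turn the bucket's candidate servers into a
# segment list carrying a connect-style function per candidate. The textual
# "prefix::function" encoding is an illustrative assumption.

def build_segment_list(sender, candidates, vip, function="connectAvail"):
    """[sender, s1::connectAvail, ..., VIP]: candidates are tried in order."""
    return [sender] + [f"{server}::{function}" for server in candidates] + [vip]

segments = build_segment_list("lb-108-1", ["s1", "s2"], "vip-app-508")
print(segments)          # ['lb-108-1', 's1::connectAvail', 's2::connectAvail', 'vip-app-508']
first_candidate = segments[1].split("::")[0]
print(first_candidate)   # 's1' -- placed in the IPv6 destination address field
```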
At step 908, the load balancer 108-1 can determine that a first candidate server from the set of candidate servers is a next segment in the list of segments. The load balancer 108-1 can identify the first candidate server based on the SR policy in the corresponding bucket, for example, or otherwise select a first server from the set of candidate servers. At step 910, the load balancer 108-1 encodes the first candidate server in a destination address field of an IPv6 header of the packet. The destination address field can represent or identify the next routing segment for the packet, which can correspond to the first candidate server. The first candidate server can also be included in the list of segments in the SR header, as previously described.
At step 912, the load balancer 108-1 can forward the packet to the first candidate server. The load balancer 108-1 can forward the packet to the first candidate server based on the destination address field from step 910. The IPv6 header can be used to route the packet to the next destination, as well as maintain state information for the packet. The segment routing header can steer the packet through the candidate servers until the request is accepted by a candidate server.
As the candidate servers receive the packet, they can decide whether to accept or deny the request based on a load of the application at the server. Thus, the candidate servers can perform local load balancing decisions as they receive the packet. If a candidate server rejects the request, it can forward the packet to the next candidate server from the list of segments in the segment routing header. When forwarding the packet, the candidate server can modify the IPv6 header to include the next server in the destination address field.
If a candidate server accepts the request, it can forward the request to the application on the server and establish a connection with the source device 102. The candidate server that accepts the request can reply with a packet whose segment routing header identifies it as the accepting server. The accepting server can modify the IPv6 header of the return packet to direct the packet through the load balancer 108-1 and towards the source device 102. The packet can include, in the IPv6 header and/or the segment routing header, a function and any arguments for the load balancer to indicate that the server has accepted the request and to establish a sticky entry at the load balancer 108-1 for subsequent communications in the session.
For example, at step 914, the first candidate server receives the packet and, at step 916, determines whether to accept or deny the request in the packet. If the first candidate server accepts the request, it processes the request at step 924. If the first candidate server rejects the request, at step 918 it identifies a next candidate server listed as a next segment in the list of segments. At step 920, the first candidate server can then forward the packet to the next candidate server. The next candidate server then receives the packet and, at step 922, determines whether to accept or reject the request. If the next candidate server accepts the request, it processes the packet at step 924. If the next candidate server rejects the request, it can identify a next candidate server at step 918, as previously described. The packet can continue being routed through candidate servers until a candidate server accepts the request or until the packet reaches the last candidate server. The last candidate server can be forced to accept the request to prevent the request from being rejected altogether.
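A minimal end-to-end sketch of steps 914 through 924 is shown below; the per-server load map and the acceptance rule are illustrative assumptions, while the forced acceptance at the last candidate follows the description:

```python
# Hedged end-to-end sketch of steps 914-924: walk the candidate servers in SR
# order, let each make a local decision, and force the last one to accept.
# The per-server load map and decision rule are illustrative assumptions.

def steer_until_accepted(candidates, busy_by_server, threshold):
    for position, server in enumerate(candidates):
        last = position == len(candidates) - 1
        if last or busy_by_server[server] < threshold:
            return server                      # step 924: request processed here
        # steps 918-920: rewrite the destination to the next segment and forward
    raise RuntimeError("unreachable: the last candidate always accepts")

load = {"s1": 5, "s2": 2}
print(steer_until_accepted(["s1", "s2"], load, threshold=3))  # 's2'
```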
The disclosure now turns to
The interfaces 1002 are typically provided as modular interface cards (sometimes referred to as “line cards”). Generally, they control the sending and receiving of data packets over the network and sometimes support other peripherals used with the network device 1000. Among the interfaces that may be provided are Ethernet interfaces, frame relay interfaces, cable interfaces, DSL interfaces, token ring interfaces, and the like. In addition, various very high-speed interfaces may be provided such as fast token ring interfaces, wireless interfaces, Ethernet interfaces, Gigabit Ethernet interfaces, ATM interfaces, HSSI interfaces, POS interfaces, FDDI interfaces, WIFI interfaces, 3G/4G/5G cellular interfaces, CAN BUS, LoRA, and the like. Generally, these interfaces may include ports appropriate for communication with the appropriate media. In some cases, they may also include an independent processor and, in some instances, volatile RAM. The independent processors may control such communications intensive tasks as packet switching, media control, signal processing, crypto processing, and management. By providing separate processors for the communications intensive tasks, these interfaces allow the master microprocessor 1004 to efficiently perform routing computations, network diagnostics, security functions, etc.
Although the system shown in
Regardless of the network device's configuration, it may employ one or more memories or memory modules (including memory 1006) configured to store program instructions for the general-purpose network operations and mechanisms for roaming, route optimization and routing functions described herein. The program instructions may control the operation of an operating system and/or one or more applications, for example. The memory or memories may also be configured to store tables such as mobility binding, registration, and association tables, etc. Memory 1006 could also hold various software containers and virtualized execution environments and data.
The network device 1000 can also include an application-specific integrated circuit (ASIC), which can be configured to perform routing and/or switching operations. The ASIC can communicate with other components in the network device 1000 via the bus 1010, to exchange data and signals and coordinate various types of operations by the network device 1000, such as routing, switching, and/or data storage operations, for example.
To enable user interaction with the computing device 1100, an input device 1145 can represent any number of input mechanisms, such as a microphone for speech, a touch-sensitive screen for gesture or graphical input, keyboard, mouse, motion input, speech and so forth. An output device 1135 can also be one or more of a number of output mechanisms known to those of skill in the art. In some instances, multimodal systems can enable a user to provide multiple types of input to communicate with the computing device 1100. The communications interface 1140 can generally govern and manage the user input and system output. There is no restriction on operating on any particular hardware arrangement and therefore the basic features here may easily be substituted for improved hardware or firmware arrangements as they are developed.
Storage device 1130 is a non-volatile memory and can be a hard disk or other types of computer readable media which can store data that are accessible by a computer, such as magnetic cassettes, flash memory cards, solid state memory devices, digital versatile disks, cartridges, random access memories (RAMs) 1125, read only memory (ROM) 1120, and hybrids thereof.
The storage device 1130 can include services 1132, 1134, 1136 for controlling the processor 1110. Other hardware or software modules are contemplated. The storage device 1130 can be connected to the system connection 1105. In one aspect, a hardware module that performs a particular function can include the software component stored in a computer-readable medium in connection with the necessary hardware components, such as the processor 1110, connection 1105, output device 1135, and so forth, to carry out the function.
For clarity of explanation, in some instances the present technology may be presented as including individual functional blocks including functional blocks comprising devices, device components, steps or routines in a method embodied in software, or combinations of hardware and software.
In some embodiments the computer-readable storage devices, mediums, and memories can include a cable or wireless signal containing a bit stream and the like. However, when mentioned, non-transitory computer-readable storage media expressly exclude media such as energy, carrier signals, electromagnetic waves, and signals per se.
Methods according to the above-described examples can be implemented using computer-executable instructions that are stored or otherwise available from computer readable media. Such instructions can comprise, for example, instructions and data which cause or otherwise configure a general purpose computer, special purpose computer, or special purpose processing device to perform a certain function or group of functions. Portions of computer resources used can be accessible over a network. The computer executable instructions may be, for example, binaries, intermediate format instructions such as assembly language, firmware, or source code. Examples of computer-readable media that may be used to store instructions, information used, and/or information created during methods according to described examples include magnetic or optical disks, flash memory, USB devices provided with non-volatile memory, networked storage devices, and so on.
Devices implementing methods according to these disclosures can comprise hardware, firmware and/or software, and can take any of a variety of form factors. Typical examples of such form factors include laptops, smart phones, small form factor personal computers, personal digital assistants, rackmount devices, standalone devices, and so on. Functionality described herein also can be embodied in peripherals or add-in cards. Such functionality can also be implemented on a circuit board among different chips or different processes executing in a single device, by way of further example.
The instructions, media for conveying such instructions, computing resources for executing them, and other structures for supporting such computing resources are means for providing the functions described in these disclosures.
Although a variety of examples and other information was used to explain aspects within the scope of the appended claims, no limitation of the claims should be implied based on particular features or arrangements in such examples, as one of ordinary skill would be able to use these examples to derive a wide variety of implementations. Further and although some subject matter may have been described in language specific to examples of structural features and/or method steps, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to these described features or acts. For example, such functionality can be distributed differently or performed in components other than those identified herein. Rather, the described features and steps are disclosed as examples of components of systems and methods within the scope of the appended claims.
This application claims priority to U.S. Provisional Patent Application No. 62/452,115, filed Jan. 30, 2017, the content of which is incorporated herein by reference in its entirety.