Load distribution in data networks

Information

  • Patent Grant
  • Patent Number
    9,705,800
  • Date Filed
    Tuesday, September 17, 2013
  • Date Issued
    Tuesday, July 11, 2017
Abstract
Provided are methods and systems for load distribution in a data network. A method for load distribution in the data network may comprise retrieving network data associated with the data network and service node data associated with one or more service nodes. The method may further comprise analyzing the retrieved network data and service node data. Based on the analysis, a service policy may be generated. Upon receiving one or more service requests, the one or more service requests may be distributed among the service nodes according to the service policy.
Description
TECHNICAL FIELD

This disclosure relates generally to data processing, and, more specifically, to load distribution in software driven networks (SDN).


BACKGROUND

The approaches described in this section could be pursued but are not necessarily approaches that have previously been conceived or pursued. Therefore, unless otherwise indicated, it should not be assumed that any of the approaches described in this section qualify as prior art merely by virtue of their inclusion in this section.


In a typical load balancing scenario, a service hosted by a group of servers is front-ended by a load balancer (LB) (also referred to herein as an LB device), which represents the service to clients as a virtual service. Clients needing the service address their packets to the virtual service using a virtual Internet Protocol (IP) address and a virtual port. For example, www.example.com:80 is a service being load balanced, and a group of servers hosts this service. An LB can be configured with a virtual IP (VIP), e.g., 100.100.100.1, and a virtual port (VPort), e.g., port 80, which, in turn, are mapped to the IP addresses and port numbers of the servers handling the service. The Domain Name System (DNS) server handling this domain can be configured to resolve the domain to the VIP, so that client packets are sent to the VIP and VPort associated with this LB.
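To make the mapping concrete, the following minimal Python sketch (all addresses, ports, and names are hypothetical illustrations, not the disclosure's data model) shows a VIP/VPort pair resolved to the pool of servers hosting the virtual service:

```python
# Hypothetical illustration of a virtual service mapping; the addresses,
# ports, and structure are examples, not the patent's actual data model.
VIRTUAL_SERVICES = {
    # (VIP, VPort) -> (server IP, server port) pairs hosting the service
    ("100.100.100.1", 80): [
        ("10.0.0.11", 8080),
        ("10.0.0.12", 8080),
        ("10.0.0.13", 8080),
    ],
}

def resolve_virtual_service(vip: str, vport: int) -> list[tuple[str, int]]:
    """Return the backend servers registered for a virtual service."""
    return VIRTUAL_SERVICES.get((vip, vport), [])

print(resolve_virtual_service("100.100.100.1", 80))
```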


The LB will inspect incoming packets and, based on its policies/algorithms, will choose a particular server from the group of servers, modify the packet if necessary, and forward the packet toward the server. On the (optional) return path from the server, the LB will receive the packet, modify it if necessary, and forward it back toward the client.


There is often a need to scale the LB service up or down. For example, the LB service may need to be scaled based on time of day, e.g., days versus nights or weekdays versus weekends. Fixed-interval software updates may cause predictable network congestion, so the LB service may need to be scaled up to handle the resulting flash crowd phenomenon and scaled down afterwards. Growing popularity of the service may likewise require scaling the service up. These situations can be handled within the LB as long as the performance characteristics of the LB device can accommodate the needed scaling.


However, in many cases the required performance exceeds what a single load balancing device can handle. Typical approaches for this include physical chassis-based solutions, where cards can be inserted and removed to match service requirements. These approaches have many disadvantages, including the need to pre-provision space, power, and cost for a chassis sized for future needs. Additionally, a single chassis can only scale up to the maximum capacity of its cards. To cure this deficiency, one can attempt to stack LB devices and send traffic between them as needed. However, this approach also has disadvantages: the link between the devices can become a bottleneck, and latency increases because packets must traverse multiple LBs to reach the entity that will eventually handle the request.


Another existing solution is to deploy multiple LB devices, create an individual VIP on each device for the same backend servers, and use DNS to distribute the load among them. When another LB needs to be added, another entry is added to the DNS database; when an LB needs to be removed, the corresponding entry is removed. However, this approach has several issues. DNS records are cached, so additions and removals of LBs may take time to become effective; this is especially problematic when an LB is removed, as data directed to it can be lost. The distribution across the LBs is very coarse and not traffic aware, e.g., one LB may be overwhelmed while other LBs sit idle, or heavier clients may all end up sending requests to the same LB. The distribution between LBs is also not capacity aware, e.g., LB1 may be a much more powerful device than LB2. Thus, the existing solutions to this problem all have their disadvantages.


SUMMARY

This summary is provided to introduce a selection of concepts in a simplified form that are further described in the Detailed Description below. This summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to be used as an aid in determining the scope of the claimed subject matter.


The present disclosure is related to approaches for load distribution in a data network. Specifically, a method for load distribution in a data network may comprise retrieving network data associated with the data network and service node data associated with one or more service nodes. The method may further comprise analyzing the retrieved network data and service node data. Based on the analysis, a service policy may be generated. The generated service policy may be provided to devices associated with the data network. Upon receiving one or more service requests, the one or more service requests may be distributed among the one or more service nodes according to the service policy.


According to another approach of the present disclosure, there is provided a system for load distribution in a data network. The system may comprise a cluster master. The cluster master may be configured to retrieve and analyze network data associated with the data network and service node data associated with one or more service nodes. Based on the analysis, the cluster master may generate a service policy and provide the generated service policy to devices associated with the data network. The system may further comprise a traffic classification engine. The traffic classification engine may be configured to receive the service policy from the cluster master. Upon receiving one or more service requests, the traffic classification engine may distribute the service requests among one or more service nodes according to the service policy. Furthermore, the system may comprise the one or more service nodes. The service nodes may be configured to receive the service policy from the cluster master and receive the one or more service requests from the traffic classification engine. The service nodes may process the one or more service requests according to the service policy.


In another approach of the present disclosure, the cluster master may reside within the traffic classification engine layer or the service node layer. Additionally, the traffic classification engine may, in turn, reside within the service node layer.


In further example embodiments of the present disclosure, the method steps are stored on a machine-readable medium comprising instructions which, when implemented by one or more processors, perform the recited steps. In yet further example embodiments, hardware systems or devices can be adapted to perform the recited steps. Other features, examples, and embodiments are described below.





BRIEF DESCRIPTION OF THE DRAWINGS

Embodiments are illustrated by way of example, and not by limitation, in the figures of the accompanying drawings, in which like references indicate similar elements and in which:



FIG. 1 shows an environment within which a method and a system for service load distribution in a data network can be implemented, according to an example embodiment.



FIG. 2 is a process flow diagram showing a method for service load distribution in a data network, according to an example embodiment.



FIG. 3 is a block diagram showing various modules of a system for service load distribution in a data network, according to an example embodiment.



FIG. 4 is a scheme for service load distribution in a data network, according to an example embodiment.



FIG. 5 shows a diagrammatic representation of a computing device for a machine in the example electronic form of a computer system, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein can be executed.





DETAILED DESCRIPTION

The following detailed description includes references to the accompanying drawings, which form a part of the detailed description. The drawings show illustrations in accordance with example embodiments. These example embodiments, which are also referred to herein as “examples,” are described in enough detail to enable those skilled in the art to practice the present subject matter. The embodiments can be combined, other embodiments can be utilized, or structural, logical, and electrical changes can be made without departing from the scope of what is claimed. The following detailed description is therefore not to be taken in a limiting sense, and the scope is defined by the appended claims and their equivalents. In this document, the terms “a” and “an” are used, as is common in patent documents, to include one or more than one. In this document, the term “or” is used to refer to a nonexclusive “or,” such that “A or B” includes “A but not B,” “B but not A,” and “A and B,” unless otherwise indicated.


The present disclosure relates to efficient ways of implementing load balancing by having an LB service and a data network, such as an SDN, work together to deliver packets to multiple LBs. Because the SDN is aware of the requirements of the LB service, it can efficiently distribute traffic to the LBs. This approach allows the same virtual service to be hosted on multiple LBs without needing any DNS changes. There is minimal to no latency impact, since packets are delivered directly to the LB that handles them. Fine-grained distribution of flows to the LBs can be achieved based on the LBs' capabilities, network capabilities, and current loads. This approach also supports scaling services up and down as needed, and it simplifies management and operation of the load balancing for administrators.


In some example embodiments, a protocol can run between the LBs and the SDN elements that lets the SDN and the LBs exchange information on how to distribute traffic, dynamically inserting forwarding rules to influence packet path selection on devices capable of such forwarding. Algorithms controlled by the LBs can thus be implemented on routers, switches, and other devices to influence traffic steering.
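The disclosure does not specify a wire format for such a protocol, but a rough sketch of the kind of forwarding rule an LB might ask an SDN element to install could look as follows (field names and the priority scheme are assumptions for illustration):

```python
from dataclasses import dataclass

# Illustrative forwarding rule an LB might ask an SDN element to install.
# Field names are assumptions; the disclosure defines no concrete format.
@dataclass
class FlowRule:
    match_dst_ip: str        # e.g., the virtual IP of the service
    match_dst_port: int      # e.g., the virtual port
    match_src_prefix: str    # client prefix the rule applies to
    action_output_lb: str    # LB device that should receive matching packets
    priority: int = 100

def install_rule(table: list, rule: FlowRule) -> None:
    """Insert a rule, keeping the table sorted by descending priority."""
    table.append(rule)
    table.sort(key=lambda r: -r.priority)

rules: list[FlowRule] = []
install_rule(rules, FlowRule("100.100.100.1", 80, "203.0.113.0/24", "lb-1"))
install_rule(rules, FlowRule("100.100.100.1", 80, "0.0.0.0/0", "lb-2", priority=10))
```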


To ensure distribution of data flows in a network of heterogeneous switches from multiple vendors, additional technologies can be used. These technologies may utilize a controller to compute paths between sources and destinations and to program the flows on the network devices along those paths. This property can be leveraged to program flows intelligently to scale the load balancing implementation out or in based on demand, availability of resources, and so forth.


As LBs activate and deactivate based on requirements such as, for example, load increases or configuration changes, the LBs can update the controller and have the controller change the flows in the network. In an example embodiment, if no appropriate external controller is present, an LB may itself act as the controller and directly change the flows in the network. Similarly, the controller can inform the LBs of network loads, the health of devices in the network, and other inputs to assist the LBs with making decisions.


The techniques of the embodiments disclosed herein may be implemented using a variety of technologies. For example, the methods described herein may be implemented in software executing on a computer system or in hardware utilizing either a combination of microprocessors or other specially designed application-specific integrated circuits (ASICs), programmable logic devices, or various combinations thereof. In particular, the methods described herein may be implemented by a series of computer-executable instructions residing on a storage medium, such as a disk drive or other computer-readable medium. It should be noted that the methods disclosed herein can be implemented by a computer (e.g., a desktop computer, a tablet computer, a laptop computer, or a server), game console, handheld gaming device, cellular phone, smart phone, smart television system, and so forth.


As outlined in the summary, the embodiments of the present disclosure refer to load distribution in an SDN. As referred to herein, an SDN is a network that allows managing network services through abstraction of lower-level functionality by decoupling the control plane from the data plane: the control plane makes decisions as to where a service request, e.g., traffic from a client to a server, is to be sent, while the data plane is responsible for forwarding the service request to the selected destination based on the decision of the control plane. The data plane may reside on network hardware or software devices, and the control plane may be executed through software. Such separation of the planes of the SDN may enable network virtualization, since commands or control rules may be executed by the software. The SDN may be configured to deliver client service requests or host service requests to virtual machines and physical devices, e.g., servers.


The control plane may be configured to ascertain the health and other data associated with the SDN and the virtual machines, for example, by means of real time data network applets. The control plane may leverage these applets and other means to gauge service responsiveness on the virtual machines; monitor total connections, central processing unit utilization, memory, and network connectivity on the virtual machines; and use that information to influence load distribution decisions and forwarding on the data plane.
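A minimal sketch of the per-virtual-machine metrics described above, with a trivial usability test over them, might look like this in Python (the field names and the 90% CPU threshold are illustrative assumptions):

```python
from dataclasses import dataclass

# Hypothetical snapshot of the per-virtual-machine metrics the control
# plane is described as monitoring; field names are illustrative only.
@dataclass
class NodeHealth:
    node_id: str
    responsive: bool            # did the service answer a probe in time?
    total_connections: int
    cpu_utilization: float      # 0.0 .. 1.0
    memory_utilization: float   # 0.0 .. 1.0
    network_reachable: bool

def is_usable(h: NodeHealth) -> bool:
    """A node may receive new load only if alive and not saturated (the 90%
    CPU cutoff is an assumed threshold, not one from the disclosure)."""
    return h.responsive and h.network_reachable and h.cpu_utilization < 0.9

print(is_usable(NodeHealth("vm-1", True, 1500, 0.42, 0.55, True)))  # True
```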


Furthermore, the control plane may comprise a service policy engine configured to analyze the collected health data and, based on the analysis, translate the health data into service policies. The service policies may include policies to adjust, i.e., scale out or scale down, the number of virtual machines, traffic classification engines, or backend servers; to remedy or repair failed virtual machines; to secure virtual machines; to introduce new virtual machines; to remove virtual machines; and so forth. Effectively, the service policies may influence load balancing and high availability as well as programming of the SDN. Therefore, based on the service policies, the SDN may scale the use of traffic distribution devices out or down through periods of dynamic load and thereby optimize network resources. The traffic distribution devices may be scaled out or down based, for example, on time of day. Furthermore, fixed-interval software updates may cause predictable network congestion, so the load balancing may need to be scaled out to handle the flash crowd phenomenon and scaled down afterwards. Additionally, growing popularity of the service may require scaling the service up.
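As a toy illustration of translating health data into a scaling policy, the sketch below derives a scale-out/scale-in decision from average CPU utilization; the thresholds and policy vocabulary are assumptions, not values from the disclosure:

```python
# Toy translation of aggregated health data into a scaling decision.
# Thresholds and the returned policy names are illustrative assumptions.
def derive_scaling_policy(cpu_avg: float, node_count: int,
                          high: float = 0.8, low: float = 0.3) -> str:
    if cpu_avg > high:
        return "scale_out"      # add virtual machines / service nodes
    if cpu_avg < low and node_count > 1:
        return "scale_in"       # remove virtual machines / service nodes
    return "steady"

print(derive_scaling_policy(cpu_avg=0.85, node_count=4))  # scale_out
print(derive_scaling_policy(cpu_avg=0.20, node_count=4))  # scale_in
```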


The SDN may comprise a controller enabling programmable control of the routing of service requests, such as network traffic, without requiring physical access to network switches. In other words, the controller may be configured to steer the traffic across the network to server pools or virtual machine pools. The service policy engine may communicate with the controller and inject the service policies into it. The controller, in turn, may steer traffic across the network devices, such as servers or virtual machines, according to the service policies.


In an example embodiment, the service data plane of the SDN may be configured as an application delivery controller (ADC). The control plane may communicate with the ADC by managing a set of service policies mapping service requests to one or more ADCs. The ADC may then relay the service requests over a physical or logical network to a backend server, i.e., to a member of a server pool or a virtual machine pool.


Referring now to the drawings, FIG. 1 illustrates an environment 100 within which a method and a system for load distribution in an SDN can be implemented. The environment 100 may include a network 110, a client 120, a system 300 for load distribution, and servers 140. The client 120 may include a user or a host associated with the network 110.


The network 110 may include the Internet or any other network capable of communicating data between devices. Suitable networks may include or interface with any one or more of, for instance, a local intranet, a PAN (Personal Area Network), a LAN (Local Area Network), a WAN (Wide Area Network), a MAN (Metropolitan Area Network), a virtual private network (VPN), a storage area network (SAN), a frame relay connection, an Advanced Intelligent Network (AIN) connection, a synchronous optical network (SONET) connection, a digital T1, T3, E1 or E3 line, Digital Data Service (DDS) connection, DSL (Digital Subscriber Line) connection, an Ethernet connection, an ISDN (Integrated Services Digital Network) line, a dial-up port such as a V.90, V.34 or V.34bis analog modem connection, a cable modem, an ATM (Asynchronous Transfer Mode) connection, or an FDDI (Fiber Distributed Data Interface) or CDDI (Copper Distributed Data Interface) connection. Furthermore, communications may also include links to any of a variety of wireless networks, including WAP (Wireless Application Protocol), GPRS (General Packet Radio Service), GSM (Global System for Mobile Communication), CDMA (Code Division Multiple Access) or TDMA (Time Division Multiple Access), cellular phone networks, GPS (Global Positioning System), CDPD (cellular digital packet data), RIM (Research in Motion, Limited) duplex paging network, Bluetooth radio, or an IEEE 802.11-based radio frequency network. The network 110 can further include or interface with any one or more of an RS-232 serial connection, an IEEE-1394 (Firewire) connection, a Fiber Channel connection, an IrDA (infrared) port, a SCSI (Small Computer Systems Interface) connection, a USB (Universal Serial Bus) connection or other wired or wireless, digital or analog interface or connection, mesh or Digi® networking. The network 110 may include a network of data processing nodes that are interconnected for the purpose of data communication. The network 110 may include an SDN. The SDN may include one or more of the above network types. Generally the network 110 may include a number of similar or dissimilar devices connected together by a transport medium enabling communication between the devices by using a predefined protocol. Those skilled in the art will recognize that the present disclosure may be practiced within a variety of network configuration environments and on a variety of computing devices.


As shown in FIG. 1, the client 120 may send service requests 150 to the servers 140, which may be backend servers. The service requests 150 may include an HTTP request, a video streaming request, a file download request, a transaction request, a conference request, and so forth. The servers 140 may include a web server, a wireless application server, an interactive television server, and so forth. The system 300 for load distribution may balance the flow of the service requests 150 among traffic forwarding devices of the network 110. The system 300 for load distribution may analyze the flow of the service requests 150 and determine which and how many traffic forwarding devices of the network 110 are needed to deliver the service requests 150 to the servers 140.



FIG. 2 is a process flow diagram showing a method 200 for service load distribution in an SDN, according to an example embodiment. The method 200 may be performed by processing logic that may comprise hardware (e.g., decision making logic, dedicated logic, programmable logic, and microcode), software (such as software running on a general-purpose computer system or a dedicated machine), or a combination of both.


The method 200 may commence with receiving network data associated with the SDN at operation 202. In an example embodiment, the network data associated with the SDN may be indicative of the health of the SDN, processing unit utilization, the number of total connections, memory status, network connectivity, backend server capacity, and so forth. At operation 204, the method may comprise retrieving service node data associated with one or more service nodes. In an example embodiment, the one or more service nodes may include a virtual machine or a physical device. The service node data may be indicative of the health of the node, its dynamic state, node processing unit utilization, node memory status, network connectivity of the service nodes, responsiveness of the one or more service nodes, and so forth.


At operation 206, the retrieved network data and service node data may be analyzed. Based on the analysis, a service policy may be generated at operation 208. The service policy may include one or more of the following: a service address, a service node address, a traffic distribution policy, a service node load policy, and so forth. The method may further comprise providing, i.e., pushing, the generated service policy to devices associated with the data network. The devices associated with the data network may include the service nodes and traffic classification engines.
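The service policy fields listed above might be grouped as in the following illustrative sketch; the concrete encoding is not specified by the disclosure, so every field name here is an assumption:

```python
from dataclasses import dataclass, field

# Illustrative container for the service policy fields named in operation
# 208; the encoding actually used is not specified by the disclosure.
@dataclass
class ServicePolicy:
    service_address: tuple[str, int]        # (VIP, VPort)
    service_node_addresses: list[str]
    traffic_distribution: dict[int, str]    # hash bucket -> service node
    node_load_limits: dict[str, int] = field(default_factory=dict)

policy = ServicePolicy(
    service_address=("100.100.100.1", 80),
    service_node_addresses=["10.0.1.1", "10.0.1.2"],
    traffic_distribution={0: "10.0.1.1", 1: "10.0.1.2"},
    node_load_limits={"10.0.1.1": 10000, "10.0.1.2": 5000},
)
```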


The method 200 may continue with providing the generated service policy to the devices associated with the data network at operation 210. Upon receiving one or more service requests at operation 212, the one or more service requests may be distributed among the one or more service nodes according to the service policy at operation 214. In an example embodiment, the method 200 may comprise developing, based on the analysis, a further service policy. The further service policy may be associated with scaling out, scaling down, remedying, or removing services associated with the one or more service nodes, introducing a new service associated with the one or more service nodes, and so forth.


In an example embodiment, the method 200 may comprise performing health checks of a backend server by the devices associated with the data network. In further example embodiments, the method 200 may comprise scaling service nodes, backend servers, traffic classification engines, and cluster masters up or down in a graceful manner, with minimal to no disruption to the traffic flow; services may likewise be scaled up or down gracefully. In the event of scaling a service node up or down, the service requests may be redirected to one or more other service nodes to continue processing data associated with the service requests. In further example embodiments, the method 200 may comprise optimizing reverse traffic from backend servers to the service node handling the service.



FIG. 3 shows a block diagram illustrating various modules of an exemplary system 300 for service load distribution in an SDN. The system 300 may comprise a cluster of devices eligible to act as a cluster master and a cluster master 305 elected from these devices. The cluster master 305 may be configured to keep track of the SDN and retrieve network data associated with the SDN. In an example embodiment, the network data may include one or more of the following: a number of total connections, processing unit utilization, a memory status, network connectivity, backend server capacity, and so forth. Furthermore, the cluster master 305 may be configured to keep track of the service nodes and retrieve service node data associated with one or more service nodes. The service node data may include one or more of the following: health, dynamic state, responsiveness of the one or more service nodes, and so forth. In other words, the cluster master 305 may keep track of the health of the network and of each service node associated with the system 300. The cluster master 305 may analyze the retrieved network data and service node data and, based on the analysis, generate a service policy. The service policy may include a service address, a service node address, a service node load policy, a traffic distribution policy (also referred to as a traffic map), and so forth. The cluster master 305 may provide the generated service policy to the devices associated with the data network, such as the service nodes and traffic classification engines.


In an example embodiment, the cluster master 305 may be further configured to develop, based on the analysis, a further service policy. The further policy may be associated with scaling out, scaling down, remedying, or removing devices, such as service nodes, traffic classification engines, and backend servers, or with introducing new service nodes, traffic classification engines, backend servers, and so forth.


In an example embodiment, the cluster master 305 may be further configured to facilitate an application programmable interface (not shown) that enables a network administrator to develop a further service policy based on the retrieved network data, the service node data, and analytics. This approach may allow application developers to write directly to the network without having to manage or understand all the underlying complexities and subsystems that compose the network.


In a further example embodiment, the cluster master 305 may include a backup unit (not shown) configured to replace the cluster master in case of a failure of the cluster master 305.


The system 300 may comprise a traffic classification engine 310. The traffic classification engine 310 may be implemented as one or more software modules, hardware modules, or a combination of hardware and software. The traffic classification engine 310 may include an engine configured to monitor data flows and classify them based on one or more attributes associated with the data flows, e.g., uniform resource locators (URLs), IP addresses, port numbers, and so forth. Each resulting data flow class can be specifically designed to implement a certain service for a client. In an example embodiment, the cluster master 305 may send a service policy to the traffic classification engine 310, which may be configured to receive the service policy from the cluster master 305. Furthermore, the traffic classification engine 310 may be configured to receive one or more incoming service requests 315, e.g., incoming data traffic from routers or switches (not shown). Typically, the data traffic may be distributed evenly from the routers or switches to each of the traffic classification engines 310. In an example embodiment, a router may perform simple equal-cost multi-path (ECMP) routing to distribute the traffic equally to all the traffic classification engines 310. The traffic classification engines 310 may distribute the one or more service requests among one or more service nodes 320 according to the service policy. The traffic may be distributed to the one or more service nodes 320 in an asymmetric fashion, either directly or through a tunnel (IP-in-IP or other overlay techniques). The traffic classification engine 310 may be stateless or stateful, may act on a per-packet basis, and may direct each packet of the traffic to the corresponding service node 320. When there is a change in the service node state, the cluster master 305 may send a new service policy, such as a new traffic map, to the traffic classification engine 310.
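A minimal sketch of such stateless, per-packet classification is shown below: the flow tuple is hashed into a bucket of a traffic map pushed by the cluster master, and the map may assign buckets unevenly to achieve the asymmetric distribution mentioned above. The bucket count and hash choice are illustrative assumptions:

```python
import hashlib

# Minimal sketch of a stateless, per-packet mapping from flow attributes
# to a service node via a traffic map (bucket table) pushed by the cluster
# master. The bucket count and hash choice are illustrative assumptions;
# an uneven bucket-to-node assignment yields asymmetric distribution.
BUCKETS = 8
traffic_map = {i: f"service-node-{i % 3}" for i in range(BUCKETS)}

def classify(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> str:
    """Hash the flow tuple into a bucket and return its owning service node."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    bucket = int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % BUCKETS
    return traffic_map[bucket]

print(classify("203.0.113.7", 51812, "100.100.100.1", 80))
```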


The system 300 may comprise the one or more service nodes 320. The one or more service nodes 320 may include a virtual machine or a physical device that may serve a corresponding virtual service to which the traffic is directed. The cluster master 305 may send the service policy to the service nodes 320. The service nodes 320 may be configured to receive the service policy from the cluster master 305. Furthermore, the service nodes 320 may receive, based on the service policy, the one or more service requests 315 from the traffic classification engine 310. The one or more service nodes 320 may process the received one or more service requests 315 according to the service policy. The processing of the one or more service requests 315 may include forwarding the one or more service requests 315 to one or more backend destination servers (not shown). Each service node 320 may serve one or more virtual services. The service nodes 320 may be configured to send the service node data to the cluster master 305.


According to a further example embodiment, an existing service node may redirect packets of existing flows to another service node when that node is the new owner of the flow after a redistribution of flows to the service nodes. Conversely, a service node taking over a flow may redirect packets to the service node that was the old owner of the flow in cases where the flow state needs to be pinned to the old owner to maintain continuity of service.
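The redirection logic just described might be sketched as follows, assuming a hypothetical table of flows pinned to their original owners (names and structure are illustrative only):

```python
# Sketch of the redirection described above: after a traffic map change,
# the new owner of a flow hands packets of pre-existing ("pinned") flows
# back to the old owner to preserve continuity. Names are illustrative.
pinned_flows = {("203.0.113.7", 51812): "old-node"}  # flow -> original owner

def handle_packet(flow: tuple, new_owner: str) -> str:
    """Return the node that should actually process this packet."""
    old_owner = pinned_flows.get(flow)
    if old_owner and old_owner != new_owner:
        return old_owner   # redirect to keep the established flow intact
    return new_owner       # new flows stay with the new owner

print(handle_packet(("203.0.113.7", 51812), "new-node"))   # old-node
print(handle_packet(("198.51.100.9", 40000), "new-node"))  # new-node
```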


Furthermore, in an example embodiment, the cluster master 305 may perform a periodic health check on the service nodes 320 and update them with a service policy, such as a traffic map. When there is a change in the traffic assignment and a packet of a flow reaches a service node 320, the service node 320 may redirect the packet to another service node, either directly or through a tunnel (e.g., IP-in-IP or other overlay techniques).


It should be noted that if each of the devices of the cluster performed the backend server health check, a large number of health check packets would be sent to an individual backend server. In view of this, the backend server health check may be performed by a few devices of the cluster, and the result may be shared with the rest of the devices in the cluster. The health check may include a service check and a connectivity check. The service check determines whether the application or the backend server is still available. As mentioned above, not every device in the cluster needs to perform this check; it may be performed by a few devices and the result propagated to the rest of the devices in the cluster. The connectivity check determines whether the service node can reach the backend server. Because the path to the backend server is specific to each service node, this check may not be distributed across service nodes, and each device in the cluster may perform its own check.
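The division of labor described above, where a few designated devices perform the service check and every device performs its own connectivity check, might be sketched like this (device names and probe callables are hypothetical):

```python
# Sketch of the shared health check: a few designated checkers probe the
# backend server and the result is propagated cluster-wide, while every
# node performs its own connectivity check because paths are node-specific.
# Device names and the probe callables are hypothetical.
cluster = ["node-a", "node-b", "node-c", "node-d"]
checkers = cluster[:2]  # only a few devices probe the backend service

def run_health_round(probe_backend, can_reach):
    service_up = all(probe_backend(n) for n in checkers)  # shared result
    return {
        n: {"service": service_up,           # propagated to every node
            "connectivity": can_reach(n)}    # each node checks its own path
        for n in cluster
    }

status = run_health_round(lambda n: True, lambda n: n != "node-d")
print(status["node-d"])  # {'service': True, 'connectivity': False}
```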


In an example embodiment, the system 300 may comprise an orchestrator 325. The orchestrator 325 may be configured to bring up and bring down the service nodes 320, the traffic classification engines 310, and the backend servers. The orchestrator 325 may detect the presence of the one or more service nodes 320 and transmit data associated with their presence to the cluster master 305. Furthermore, the orchestrator 325 may inform the cluster master 305 when service nodes 320 are brought up or down. The orchestrator 325 may communicate with the cluster master 305 and the service nodes 320 using one or more Application Programming Interfaces (APIs).


In an example embodiment, a centralized or distributed network database may be used and shared among all devices in the cluster of the system 300, such as the cluster master, the traffic classification engines, and the service nodes. Each device may connect to the network database and update tables according to its role. Relevant database records may be replicated to the devices that are part of the cluster. The distributed network database may be used to store configurations and states of the devices, e.g., data associated with the cluster master, the traffic classification engines, the one or more service nodes, the backend servers, and the service policies. The data stored in the distributed network database may include the network data and the service node data. The distributed network database may include tables with information concerning service types, availability of resources, traffic classification, network maps, and so forth. The cluster master 305 may be responsible for maintaining the distributed network database and replicating it to the devices. The network database may be replicated to the traffic classification engines 310 and the service nodes 320. In an example embodiment, the network database may internally replicate data across the participant nodes.
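An illustrative shape for such a shared network database and its replication to cluster members is sketched below; the actual schema and replication transport are not specified by the disclosure:

```python
import copy

# Illustrative shape of the shared network database tables mentioned
# above; the real schema is not specified by the disclosure.
network_db = {
    "service_types": {"www.example.com:80": "http"},
    "resources":     {"service-node-1": {"cpu": 0.4, "connections": 1200}},
    "traffic_map":   {0: "service-node-1", 1: "service-node-2"},
}

def replicate(db: dict, members: list[str]) -> dict[str, dict]:
    """The cluster master replicates relevant records to every member;
    a deep copy per member stands in for a real replication transport."""
    return {m: copy.deepcopy(db) for m in members}

replicas = replicate(network_db, ["tce-1", "service-node-1", "service-node-2"])
print(sorted(replicas))
```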


In the embodiments described above, the system 300 may comprise a dedicated cluster master 305, dedicated traffic classification engines 310, and dedicated service nodes 320. In other words, specific devices may be responsible for acting as the cluster master, the traffic classification engine, and the service node. In further example embodiments, the system 300 may include no dedicated devices acting as a cluster master. In this case, the cluster master functionality may be provided by either the traffic classification engines or by the service nodes. Thus, one of the traffic classification engines or one of the service nodes may act as the cluster master. In case the traffic classification engine or service node acting as the cluster master fails, another traffic classification engine or service node may be elected as the cluster master. The traffic classification engines and the service nodes not elected as the cluster master may be configured as backup cluster masters and synchronized with the current cluster master. In an example embodiment, the cluster master may consist of multiple active devices which can act as a single master by sharing duties among the devices.
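A minimal sketch of such an election is given below; the lowest-surviving-ID tie-break is an assumption for illustration, as the disclosure does not mandate a particular election scheme:

```python
# Minimal sketch of electing a new cluster master when the current one
# fails: any surviving traffic classification engine or service node is
# eligible, and the lowest surviving device ID wins. The tie-break rule
# is an assumption; the disclosure mandates no particular election scheme.
def elect_master(candidates: dict[str, bool]) -> str | None:
    """candidates maps device ID -> is_alive; return the new master."""
    alive = sorted(dev for dev, up in candidates.items() if up)
    return alive[0] if alive else None

devices = {"tce-1": False, "svc-3": True, "tce-2": True}  # tce-1 failed
print(elect_master(devices))  # tce-2
```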


In further example embodiments, the system 300 may comprise a dedicated cluster master with no dedicated devices acting as traffic classification engines. In this case, the traffic classification may be performed by one of the upstream routers or switches. Also, the service nodes may distribute the traffic among themselves. In an example embodiment, the cluster master and the service nodes may be configured to act as a traffic classification engine.


In further example embodiments, the system 300 may include no dedicated devices acting as cluster masters or traffic classification engines. In this case, one of the service nodes may also act as the cluster master, and the traffic classification may be done by upstream routers or switches. The cluster master may program the upstream routers with the traffic mapping. Additionally, the service nodes may distribute the traffic among themselves.


It should be noted that bringing up new service nodes when the load increases and bringing them down when the load returns to normal may be performed gracefully, without affecting existing data traffic and connections. When a service node comes up, the distribution of traffic may change from a distribution across n service nodes to a distribution across (n+1) service nodes.
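One way to make the n-to-(n+1) transition graceful is consistent hashing, under which only roughly 1/(n+1) of flows change owners; this is an illustrative technique choice, not one mandated by the disclosure:

```python
import hashlib

# Sketch of one way to keep the n -> (n+1) transition graceful: with
# consistent hashing only roughly 1/(n+1) of flows change owners. This
# is an illustrative technique choice, not mandated by the disclosure.
def _h(key: str) -> int:
    return int.from_bytes(hashlib.sha256(key.encode()).digest()[:4], "big")

def hash_ring(nodes: list[str], vnodes: int = 64) -> list[tuple[int, str]]:
    return sorted((_h(f"{n}#{v}"), n) for n in nodes for v in range(vnodes))

def owner(ring: list[tuple[int, str]], flow_key: str) -> str:
    h = _h(flow_key)
    for point, node in ring:
        if h <= point:
            return node
    return ring[0][1]  # wrap around the ring

before = hash_ring(["sn-1", "sn-2", "sn-3"])
after = hash_ring(["sn-1", "sn-2", "sn-3", "sn-4"])
flows = [f"flow-{i}" for i in range(1000)]
moved = sum(owner(before, f) != owner(after, f) for f in flows)
print(f"{moved / len(flows):.0%} of flows remapped")  # roughly 25%
```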


When a service node is about to be brought down, the traffic coming to that service node may be redirected to other service nodes. For this purpose, a redirection policy associated with the service node being brought down may be created by the cluster master and sent to the traffic distribution engine and/or the service nodes. Upon receiving the redirection policy, the traffic distribution engine may direct the traffic to another service node.


In an example embodiment, the system 300 may comprise a plurality of traffic distribution engines, each of which may serve traffic to multiple services. Each of the traffic distribution engines may communicate with a different set of service nodes. In case one of the traffic distribution engines fails, another traffic distribution engine may be configured to substitute for the failed engine and to distribute its traffic to the corresponding service nodes. Therefore, each of the traffic distribution engines may store the addresses of all service nodes, and not only the addresses of the service nodes it is currently communicating with.



FIG. 4 shows a diagram 400 for load distribution in an SDN. As shown, diagram 400 includes a client 120, e.g., a computer connected to a network 110. The network 110 may include the SDN. The client 120 may send one or more service requests for services provided by one or more servers of the virtual machine/server pool 405 (also referred to herein as the virtual machine/physical server pool 405). These servers may include web servers, wireless application servers, interactive television servers, and so forth. The service requests can be load balanced by the system for load distribution described above. In other words, the service requests of the client 120 may be intelligently distributed among the virtual machine/physical server pool 405 of the SDN.


The system for load distribution may include a service control plane 410. The service control plane 410 may include one or more data network applets 415, for example, a real time data network applet. The data network applets 415 may check the health and other data associated with the SDN and the virtual machines in the virtual machine/server pool 405. For example, the data network applets 415 may determine responsiveness of the virtual machines in the virtual machine/server pool 405. Furthermore, the data network applets 415 may monitor the total connections, central processing unit utilization, memory, network connectivity on the virtual machines in the virtual machine/server pool 405, and so forth. Therefore, the data network applets 415 may retrieve fine-grained, comprehensive information concerning the SDN and virtual machine service infrastructure.


The retrieved health data may be transmitted to a service policy engine 420. In example embodiments, the cluster master 305 described above may act as the service policy engine 420. The service policy engine 420 may analyze the health data and, based on the analysis, generate a set of service policies 430 to scale services up or down, to secure services, to introduce new services, to remove services, to remedy or repair failed devices, and so forth. The system for load distribution may further comprise an orchestrator (not shown) configured to bring up more virtual machines on demand. Therefore, in order to deliver a smooth client experience, the service requests may be load balanced across the virtual machines in the virtual machine/server pool 405.


Furthermore, the service policies 430 may be provided to an SDN controller 435. The SDN controller 435, in turn, may steer service requests, i.e., data traffic, across the network devices in the SDN. Effectively, these policies may influence load balancing and high availability, as well as programming of the SDN to scale services up or down.


Generally speaking, by unlocking the data associated with the network, the service nodes, and the servers/virtual machines from inside the network, transforming that data into relevant information and the service policies 430, and then presenting the service policies 430 to the SDN controller 435 for configuring the SDN 110, the described infrastructure may enable feedback loops between the underlying infrastructure and applications that improve network optimization and application responsiveness.


The service control plane 410, working in conjunction with the SDN controller 435 and the service policy engine 420, may create a number of deployment possibilities offering an array of basic and advanced load distribution features. In particular, to provide simple load balancing functionality, the SDN controller 435 and the service control plane 410 may provide some load balancing of their own by leveraging the capabilities of the SDN 110 or, alternatively, work in conjunction with an ADC 440 (also referred to as a service data plane) included in the SDN 110 to optionally provide advanced additional functionality.


In an example embodiment, when the service control plane 410 is standalone, i.e., operates without an ADC 440, virtual machines in the virtual machine/server pool 405, when scaled up, may be programmed with a virtual Internet Protocol (VIP) address on a loopback interface. Thus, for data traffic in need of simple service fulfillment, the service control plane 410 may establish simple policies for distributing service requests and instruct the SDN controller 435 to program network devices to distribute the service requests directly to different virtual machines/physical servers in the virtual machine/server pool 405. This step may be performed over a physical or logical network.


In an example embodiment, when the service control plane 410 works in cooperation with an ADC 440 to deliver the more sophisticated functionality typically offered by a purpose-built ADC device, the service control plane 410 may manage a set of service policies mapping service requests to one or more ADC devices. The service control plane 410 may instruct the SDN controller 435 to program the network devices such that the service requests, i.e., the traffic, reach one or more ADCs 440. The ADC 440 may then relay the service request to a backend server over a physical or logical network.


In the described embodiment, several traffic flow scenarios may exist. In an example embodiment, only forward traffic may go through the ADC 440. If simple ADC functionality, e.g., rate limiting, bandwidth limiting, or scripting policies, is required, the forward traffic may traverse the ADC 440. The loopback interface on the servers may be programmed with the VIP address, and response traffic from the virtual machines in the virtual machine/server pool 405 may bypass the ADC 440.


In a further example embodiment, both forward and reverse traffic may traverse the ADC 440. If the ADC 440 is to provide more advanced functionality, e.g., transmission control protocol (TCP) flow optimization, secure sockets layer (SSL) decryption, compression, or caching, the service control plane 410 may need to ensure that both the forward and reverse traffic traverse the ADC 440 by appropriately programming the SDN 110.



FIG. 5 shows a diagrammatic representation of a machine in the example electronic form of a computer system 500, within which a set of instructions for causing the machine to perform any one or more of the methodologies discussed herein may be executed. In various example embodiments, the machine operates as a standalone device or may be connected (e.g., networked) to other machines. In a networked deployment, the machine may operate in the capacity of a server or a client machine in a server-client network environment, or as a peer machine in a peer-to-peer (or distributed) network environment. The machine may be a PC, a tablet PC, a set-top box (STB), a cellular telephone, a portable music player (e.g., a portable hard drive audio device such as a Moving Picture Experts Group Audio Layer 3 (MP3) player), a web appliance, a network router, switch or bridge, or any machine capable of executing a set of instructions (sequential or otherwise) that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute a set (or multiple sets) of instructions to perform any one or more of the methodologies discussed herein.


The example computer system 500 includes a processor or multiple processors 502 (e.g., a central processing unit (CPU), a graphics processing unit (GPU), or both), a main memory 504 and a static memory 506, which communicate with each other via a bus 508. The computer system 500 may further include a video display unit 510 (e.g., a liquid crystal display (LCD) or a cathode ray tube (CRT)). The computer system 500 may also include an alphanumeric input device 512 (e.g., a keyboard), a cursor control device 514 (e.g., a mouse), a disk drive unit 516, a signal generation device 518 (e.g., a speaker), and a network interface device 520.


The disk drive unit 516 includes a non-transitory computer-readable medium 522, on which is stored one or more sets of instructions and data structures (e.g., instructions 524) embodying or utilized by any one or more of the methodologies or functions described herein. The instructions 524 may also reside, completely or at least partially, within the main memory 504 and/or within the processors 502 during execution thereof by the computer system 500. The main memory 504 and the processors 502 may also constitute machine-readable media.


The instructions 524 may further be transmitted or received over a network 526 via the network interface device 520 utilizing any one of a number of well-known transfer protocols (e.g., Hyper Text Transfer Protocol (HTTP)).


While the computer-readable medium 522 is shown in an example embodiment to be a single medium, the term “computer-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database and/or associated caches and servers) that store the one or more sets of instructions. The term “computer-readable medium” shall also be taken to include any medium that is capable of storing, encoding, or carrying a set of instructions for execution by the machine and that causes the machine to perform any one or more of the methodologies of the present application, or that is capable of storing, encoding, or carrying data structures utilized by or associated with such a set of instructions. The term “computer-readable medium” shall accordingly be taken to include, but not be limited to, solid-state memories, optical and magnetic media, and carrier wave signals. Such media may also include, without limitation, hard disks, floppy disks, flash memory cards, digital video disks, random access memories (RAMs), read-only memories (ROMs), and the like.


The example embodiments described herein can be implemented in an operating environment comprising computer-executable instructions (e.g., software) installed on a computer, in hardware, or in a combination of software and hardware. The computer-executable instructions can be written in a computer programming language or can be embodied in firmware logic. If written in a programming language conforming to a recognized standard, such instructions can be executed on a variety of hardware platforms and for interfaces to a variety of operating systems. Although not limited thereto, computer software programs for implementing the present method can be written in any number of suitable programming languages such as, for example, Hypertext Markup Language (HTML), Dynamic HTML, Extensible Markup Language (XML), Extensible Stylesheet Language (XSL), Document Style Semantics and Specification Language (DSSSL), Cascading Style Sheets (CSS), Synchronized Multimedia Integration Language (SMIL), Wireless Markup Language (WML), Java™, Jini™, C, C++, Perl, UNIX Shell, Visual Basic or Visual Basic Script, Virtual Reality Markup Language (VRML), ColdFusion™ or other compilers, assemblers, interpreters or other computer languages or platforms.


Thus, methods and systems for load distribution in an SDN are disclosed. Although embodiments have been described with reference to specific example embodiments, it will be evident that various modifications and changes can be made to these example embodiments without departing from the broader spirit and scope of the present application. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.

Claims
  • 1. A method for service load distribution in a data network, the method comprising: generating a service policy for distributing network service requests among a plurality of load balancing devices in the data network, wherein the plurality of load balancing devices includes a plurality of routers, a plurality of traffic classification engines, and a plurality of service nodes; providing the service policy to the plurality of load balancing devices associated with the data network; receiving, by the plurality of routers, one or more service requests; distributing, by the plurality of routers, the one or more service requests evenly to one or more of the plurality of traffic classification engines; distributing, by the one or more of the plurality of traffic classification engines, the one or more service requests asymmetrically to one or more of the plurality of service nodes according to the service policy; and distributing, by the one or more of the plurality of service nodes, the one or more service requests to one or more backend servers according to the service policy, wherein the service policy is generated based on at least a responsiveness of each of the plurality of service nodes and reachability of the one or more backend servers to the one or more of the plurality of service nodes.
  • 2. The method of claim 1, further comprising: retrieving network data associated with the data network; retrieving service node data associated with one or more service nodes; and analyzing the network data and the service node data, wherein the service policy is generated based on the analysis of the network data and the service node data.
  • 3. The method of claim 2, further comprising pushing the service policy to the plurality of load balancing devices associated with the data network.
  • 4. The method of claim 2, wherein the network data includes at least one of health of a service node, a number of total connections, processing unit utilization, a memory status, backend server capacity, and network connectivity.
  • 5. The method of claim 2, wherein the service node data includes at least one of a dynamic state, node processing unit utilization, a node memory status, and the responsiveness of each of the plurality of service nodes.
  • 6. The method of claim 2, further comprising developing a further service policy based on the analysis, wherein the further service policy is associated with scaling up, scaling down, remedying, removing or introducing one or more new service nodes, traffic classification engines or backend servers.
  • 7. The method of claim 6, wherein when a service node of the plurality of service nodes is scaled up or scaled down, the one or more service requests are redirected to the one or more of the plurality of service nodes to continue processing data associated with the one or more service requests.
  • 8. The method of claim 2, wherein the plurality of service nodes includes a virtual machine and a physical device.
  • 9. The method of claim 2, further comprising facilitating reverse traffic from the backend servers to the one or more of the plurality of service nodes.
  • 10. The method of claim 1, wherein the data network includes a software driven network (SDN), the SDN comprising at least one of the plurality of traffic classification engines, the plurality of service nodes, and application delivery controllers.
  • 11. The method of claim 1, wherein the service policy includes at least one of a traffic distribution policy and a service node load policy.
  • 12. The method of claim 1, further comprising: facilitating an application programmable interface to a network administrator; and developing a further service policy based on the analysis, the further service policy being developed by the network administrator via the application programmable interface.
  • 13. The method of claim 1, further comprising performing a health check of the one or more backend servers by the plurality of load balancing devices associated with the data network.
  • 14. The method of claim 1, further comprising scaling up the plurality of service nodes, the one or more backend servers, the plurality of traffic classification engines, cluster masters, and other devices in the SDN network while reducing disruption to traffic flow.
  • 15. The method of claim 1, further comprising scaling down the plurality of service nodes, the one or more backend servers, the plurality of traffic classification engines, cluster masters, and other devices in the SDN network while reducing disruption to traffic flow.
  • 16. The method of claim 1, further comprising scaling up or scaling down services while reducing disruption to traffic flow.
  • 17. The method of claim 1, further comprising: detecting the one or more of the plurality of service nodes; and transmitting data associated with the one or more of the plurality of service nodes to the cluster master.
  • 18. The method of claim 1, further comprising: storing data associated with at least one of the cluster master, the plurality of traffic classification engines, the plurality of service nodes, the one or more backend servers, and service policies; and sharing the data among the cluster master, the plurality of traffic classification engines, and the plurality of service nodes.
  • 19. A system for service load distribution in a data network, the system comprising: a cluster master that: retrieves network data associated with the data network; retrieves service node data associated with one or more service nodes; analyzes the network data and the service node data; based on the analysis, generates a service policy for distributing network service requests among a plurality of load balancing devices in the data network, wherein the plurality of load balancing devices includes a plurality of routers, a plurality of traffic classification engines, and a plurality of service nodes; and provides the service policy to the plurality of load balancing devices associated with the data network; the plurality of routers that: receive one or more service requests; and distribute the one or more service requests evenly to one or more of the plurality of traffic classification engines; the plurality of traffic classification engines, wherein at least the one or more of the plurality of traffic classification engines are configured to: receive the service policy; receive the one or more service requests from the plurality of routers; and distribute the one or more service requests asymmetrically to one or more of the plurality of service nodes according to the service policy; and the plurality of service nodes, wherein at least the one or more of the plurality of service nodes are configured to distribute the one or more service requests to one or more backend servers according to the service policy; wherein the service policy is generated based on at least a responsiveness of each of the plurality of service nodes and reachability of the one or more backend servers to the one or more of the plurality of service nodes.
  • 20. The system of claim 19, wherein the plurality of service nodes and the plurality of traffic classification engines act as a cluster master, and wherein the cluster master and the plurality of service nodes act as a traffic classification engine.
  • 21. The system of claim 19, further comprising an orchestrator that: detects the one or more of the plurality of service nodes; and transmits data associated with the one or more of the plurality of service nodes to the cluster master.
  • 22. The system of claim 19, further comprising a network database that: stores data associated with at least one of the cluster master, the plurality of traffic classification engines, the plurality of service nodes, the one or more backend servers, and service policies; and allows the data to be shared among the cluster master, the plurality of traffic classification engines, and the plurality of service nodes.
  • 23. The system of claim 22, wherein the stored data includes the network data and the service node data.
  • 24. The system of claim 19, wherein each of the plurality of service nodes is further configured to send the service node data to the cluster master.
  • 25. The system of claim 19, wherein the network data includes at least one of a number of total connections, processing unit utilization, a memory status, and network connectivity.
  • 26. The system of claim 19, wherein the service node data includes at least one of a health of the service node, a dynamic state, and the responsiveness of each of the plurality of service nodes.
  • 27. The system of claim 19, wherein the service policy includes at least one of a service address, a service node address, a traffic distribution policy, and a service node load policy.
  • 28. The system of claim 19, wherein the cluster master includes a backup unit configured to replace the cluster master in case of a failure of the cluster master.
  • 29. The system of claim 19, wherein the cluster master performs at least one of: developing a further service policy based on the analysis, wherein the further service policy is associated with scaling down, scaling up, remedying, removing services associated with the plurality of service nodes, or introducing a new service associated with the plurality of service nodes; and facilitating provision of an application programmable interface to a network administrator to enable the network administrator to develop, based on the analysis, a further service policy.
  • 30. The system of claim 29, wherein when services associated with a service node of the plurality of service nodes are scaled up or scaled down, the one or more service requests are redirected to the one or more of the plurality of service nodes to continue processing data associated with the one or more service requests.
  • 31. The system of claim 19, wherein the plurality of load balancing devices associated with the data network are further configured to perform a health check of the one or more backend servers.
  • 32. The system of claim 19, wherein the system is further configured to scale up the plurality of service nodes, the one or more backend servers, the plurality of traffic classification engines, cluster masters, and other devices in the SDN network while reducing disruption to traffic flow.
  • 33. The system of claim 19, wherein the system is further configured to scale down the plurality of service nodes, the one or more backend servers, the plurality of traffic classification engines, cluster masters, and other devices in the SDN network while reducing disruption to traffic flow.
  • 34. The system of claim 19, wherein the system is further configured to scale up and scale down services while reducing disruption to traffic flow.
  • 35. The system of claim 19, wherein the system is further configured to facilitate reverse traffic from the one or more backend servers to the one or more of the plurality of service nodes.
  • 36. A non-transitory processor-readable medium having instructions stored thereon, which when executed by one or more processors, cause the one or more processors to perform the following operations: retrieving network data associated with a data network; retrieving service node data associated with a plurality of service nodes; analyzing the network data and the service node data; based on the analyzed network data and service node data, generating a service policy for distributing network service requests among a plurality of load balancing devices in the data network, wherein the plurality of load balancing devices includes a plurality of routers, a plurality of traffic classification engines, and the plurality of service nodes; providing the service policy to the plurality of load balancing devices associated with the data network; receiving, by the plurality of routers, one or more service requests; distributing, by the plurality of routers, the one or more service requests evenly to one or more of the plurality of traffic classification engines; distributing, by the one or more of the plurality of traffic classification engines, the one or more service requests asymmetrically to one or more of the plurality of service nodes according to the service policy; distributing, by the one or more of the plurality of service nodes, the one or more service requests to one or more backend servers according to the service policy, wherein the service policy is generated based on at least a responsiveness of each of the plurality of service nodes and reachability of the one or more backend servers to the one or more of the plurality of service nodes; developing a first further service policy based on the analysis, wherein the first further service policy is associated with scaling up, scaling down, remedying, or removing services associated with the plurality of service nodes, and introducing a new service associated with the plurality of service nodes; facilitating providing an application programmable interface to a network administrator; developing a second further service policy based on the analysis by the network administrator via the application programmable interface; performing a health check of the one or more backend servers by the plurality of load balancing devices associated with the data network; scaling up or scaling down at least one of the plurality of service nodes, the one or more backend servers, the plurality of traffic classification engines, and cluster masters while reducing disruption to traffic flow; scaling up or scaling down services while reducing disruption to traffic flow; facilitating reverse traffic from the one or more backend servers to the one or more of the plurality of service nodes; and redirecting the one or more service requests to the one or more of the plurality of service nodes to continue processing data associated with the one or more service requests when at least one service node of the plurality of service nodes has been scaled up or down.
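Claims 19 and 36 above recite a two-tier distribution scheme: routers spread service requests evenly across traffic classification engines, and the engines then distribute requests asymmetrically across service nodes according to a service policy that the cluster master derives from node responsiveness and backend reachability. The following is a minimal, non-limiting sketch of that flow in Python; every identifier (ServiceNode, ClusterMaster, router_pick_tce, tce_pick_service_node), the weighting formula, and the hashing scheme are illustrative assumptions, not elements of the claimed system.

```python
# Illustrative sketch only; names, formulas, and hashing are assumptions.
import hashlib
import random
from dataclasses import dataclass


@dataclass
class ServiceNode:
    address: str
    responsiveness: float     # e.g., inverse latency; higher is better
    reachable_backends: int   # backend servers this node can reach


class ClusterMaster:
    """Analyzes service node data and generates a service policy as a set
    of per-node weights favoring responsive nodes with good backend
    reachability (the exact formula here is an assumption)."""

    def generate_policy(self, nodes):
        raw = {n.address: n.responsiveness * max(n.reachable_backends, 0)
               for n in nodes}
        total = sum(raw.values()) or 1.0
        return {addr: weight / total for addr, weight in raw.items()}


def router_pick_tce(request_id, tces):
    """Routers distribute requests evenly across traffic classification
    engines; here, a hash of the request identifier picks the engine."""
    digest = int(hashlib.sha256(request_id.encode()).hexdigest(), 16)
    return tces[digest % len(tces)]


def tce_pick_service_node(policy):
    """Traffic classification engines distribute asymmetrically: a
    weighted random choice according to the service policy."""
    addresses, weights = zip(*policy.items())
    return random.choices(addresses, weights=weights, k=1)[0]


if __name__ == "__main__":
    nodes = [
        ServiceNode("10.0.0.1", responsiveness=0.9, reachable_backends=4),
        ServiceNode("10.0.0.2", responsiveness=0.5, reachable_backends=4),
        ServiceNode("10.0.0.3", responsiveness=0.9, reachable_backends=1),
    ]
    policy = ClusterMaster().generate_policy(nodes)
    engine = router_pick_tce("client-42:flow-7", ["tce-a", "tce-b"])
    node = tce_pick_service_node(policy)
    print(f"policy={policy}")
    print(f"router -> {engine}; {engine} -> service node {node}")
```

In this sketch, hash-based selection in router_pick_tce yields the even spread recited for the routers, while the weighted random choice in tce_pick_service_node realizes the asymmetric, policy-driven distribution recited for the traffic classification engines.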
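Claims 15-16 and 30-34 further recite scaling devices and services up or down while reducing disruption to traffic flow, with in-flight service requests redirected so that processing can continue. A hypothetical flow-table sketch of one way such behavior could work follows, again with all names and the persistence strategy assumed for illustration only: established flows stay pinned to their service node, a newly added node receives only new flows, and flows owned by a departing node are redirected to the surviving nodes.

```python
# Illustrative sketch only; the flow-table design is an assumption.
import random


class FlowTable:
    """Maps active flow identifiers to service node addresses so that
    scaling events do not break connections mid-stream."""

    def __init__(self, nodes):
        self.nodes = list(nodes)   # service nodes in the current policy
        self.flows = {}            # flow_id -> service node address

    def pick_node(self):
        # Stand-in for a policy-driven (e.g., weighted) selection.
        return random.choice(self.nodes)

    def route(self, flow_id):
        # Established flows keep their node; new flows get one assigned.
        if flow_id not in self.flows:
            self.flows[flow_id] = self.pick_node()
        return self.flows[flow_id]

    def scale_up(self, node_addr):
        # A newly added node receives only new flows; existing traffic
        # is undisturbed.
        self.nodes.append(node_addr)

    def scale_down(self, node_addr):
        # Stop assigning new flows to the departing node, then redirect
        # the flows it still owns so request processing can continue.
        self.nodes.remove(node_addr)
        redirected = [fid for fid, addr in self.flows.items()
                      if addr == node_addr]
        for fid in redirected:
            self.flows[fid] = self.pick_node()
        return redirected


if __name__ == "__main__":
    table = FlowTable(["10.0.0.1", "10.0.0.2"])
    for fid in ("flow-1", "flow-2", "flow-3"):
        table.route(fid)
    table.scale_up("10.0.0.3")
    moved = table.scale_down("10.0.0.1")
    print(f"redirected flows: {moved}")
```
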
CROSS-REFERENCE TO RELATED APPLICATIONS

This patent application claims the priority benefit of U.S. provisional patent application No. 61/705,618, filed Sep. 25, 2012, the disclosure of which is incorporated herein by reference.

Related Publications (1)
Number Date Country
20140089500 A1 Mar 2014 US
Provisional Applications (1)
Number Date Country
61705618 Sep 2012 US