Selective load balancing of network traffic

Information

  • Patent Grant
  • Patent Number
    11,122,114
  • Date Filed
    Monday, August 12, 2019
  • Date Issued
    Tuesday, September 14, 2021
Abstract
In one embodiment, load balancing criteria and an indication of a plurality of network nodes is received. A plurality of forwarding entries are created based on the load balancing criteria and the indication of the plurality of nodes. A content addressable memory of a network element is programmed with the plurality of forwarding entries. The network element selectively load balances network traffic by applying the plurality of forwarding entries to the network traffic, wherein network traffic meeting the load balancing criteria is load balanced among the plurality of network nodes.
Description
TECHNICAL FIELD

This disclosure relates in general to the field of communications and, more particularly, to selective load balancing of network traffic.


BACKGROUND

A network element may include one or more ingress ports and one or more egress ports. The network element may receive network traffic through the ingress ports. As an example, network traffic may include one or more packets containing control information and data. The network element may perform various operations on the network traffic to select one or more of the egress ports for forwarding the network traffic. The network element then forwards the network traffic on to one or more devices coupled to the network element through the one or more egress ports.





BRIEF DESCRIPTION OF THE DRAWINGS

To provide a more complete understanding of the present disclosure and features and advantages thereof, reference is made to the following description, taken in conjunction with the accompanying figures, wherein like reference numerals represent like parts, in which:



FIG. 1 illustrates a block diagram of a system for selective load balancing of network traffic in accordance with certain embodiments.



FIG. 2 illustrates a block diagram of a network element that performs selective load balancing in accordance with certain embodiments.



FIG. 3 illustrates example load balancing criteria and traffic forwarding entries in accordance with certain embodiments.



FIG. 4 illustrates a block diagram of one or more network elements embodied within a chassis in accordance with certain embodiments.



FIG. 5 illustrates an example method for selectively load balancing network traffic in accordance with certain embodiments.





DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

Overview


In one embodiment, load balancing criteria and an indication of a plurality of network nodes is received. A plurality of forwarding entries are created based on the load balancing criteria and the indication of the plurality of nodes. A content addressable memory of a network element is programmed with the plurality of forwarding entries. The network element selectively load balances network traffic by applying the plurality of forwarding entries to the network traffic, wherein network traffic meeting the load balancing criteria is load balanced among the plurality of network nodes.


Example Embodiments


FIG. 1 illustrates a block diagram of a system 100 for selectively load balancing network traffic in accordance with certain embodiments. System 100 includes various network nodes 104 coupled to network element 108 via networks 112. In operation, network element 108 forwards network traffic (e.g., data packets) from one or more network nodes 104 or an internal component of network element 108 to one or more other network nodes 104 or an internal component of network element 108. Network element 108 may implement various load balancing criteria received from a network administrator associated with network element 108. As an example, a network administrator may instruct network element 108 to load balance traffic that matches the criteria and to forward traffic that does not meet the criteria in a normal manner. Thus, network element 108 allows a network administrator to customize its traffic forwarding.


As the number of network nodes in a network increases, complexity in the network increases as well. As the network complexity increases, implementation of customized traffic forwarding rules may require additional hardware and/or software resources, power, and time, particularly if the customization is implemented in a serial fashion. For example, if a user desires to selectively load balance traffic, the user may need to configure multiple pieces of equipment, such as a first line card that selects particular traffic and a second line card that performs the load balancing on the selected traffic. Alternatively, a network appliance may be used to select and load balance traffic, but this would introduce undesirable latency in a network element, such as a network switch, that is used for high-speed bridging and/or routing operations, since a network appliance performs these functions in software (i.e., a processor of the network appliance executes instructions in order to perform these functions).


Various embodiments of the present disclosure provide systems and methods for simultaneous traffic selection and load-balancing operations. Such embodiments provide efficient utilization of network element 108's resources and faster operation than systems that perform traffic forwarding customization operations in a serial fashion and/or in software. In particular embodiments, a traffic selection command and a redirection command may be merged and may be applied to network traffic in a single clock cycle of network element 108.


Network element 108 may be any device or system operable to forward traffic in conjunction with customized rules. For example, network elements may include network switches, routers, servers (physical servers or servers virtually implemented on physical hardware), machines (physical machines or machines virtually implemented on physical hardware), end user devices, access points, cable boxes, gateways, bridges, load balancers, firewalls, inline service nodes, proxies, processors, modules; other suitable devices, components, elements, proprietary appliances, or objects operable to exchange, receive, and transmit information in a network environment; or a combination of two or more of these. A network element may include any suitable hardware, software, components, modules, interfaces, or objects that facilitate operations associated with selectively load balancing network traffic. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange of data or information. Network element 108 may be deployed in a data center, as an aggregation node (to aggregate traffic from a plurality of access domains), within a core network, or in another suitable configuration.


Similarly, a network node 104 may be any device or system operable to exchange, transmit, and/or receive information in a network environment. For example, network nodes may include network switches, routers, servers (physical servers or servers virtually implemented on physical hardware) (e.g., servers 104a-d and 104f), machines (physical machines or machines virtually implemented on physical hardware), end user devices (such as laptop 104h, desktop computers 104e and 104i, and smartphone 104j), access points (e.g., 104g), cable boxes, gateways, bridges, load balancers, firewalls, inline service nodes, proxies, processors, modules; any other suitable devices, components, elements, proprietary appliances, or objects operable to exchange, receive, and transmit information in a network environment; or a combination of two or more of these. A network node 104 may include any suitable hardware, software, components, modules, interfaces, or objects that facilitate its communications operations. This may be inclusive of appropriate algorithms and communication protocols that allow for the effective exchange of data or information.


A network node 104 or a network element 108 may include one or more portions of one or more computer systems. In particular embodiments, one or more of these computer systems may perform one or more steps of one or more methods described or illustrated herein. In particular embodiments, one or more computer systems may provide functionality described or illustrated herein. In some embodiments, encoded software running on one or more computer systems may perform one or more steps of one or more methods described or illustrated herein and/or provide functionality described or illustrated herein. The components of the one or more computer systems may comprise any suitable physical form, configuration, number, type, and/or layout. Where appropriate, one or more computer systems may be unitary or distributed, span multiple locations, span multiple machines, or reside in a cloud, which may include one or more cloud components in one or more networks.


A network 112 represents a series of points, nodes, or network elements of interconnected communication paths for receiving and transmitting packets of information that propagate through a communication system. A network offers a communicative interface between sources and/or hosts, and may be any local area network (LAN), wireless local area network (WLAN), metropolitan area network (MAN), intranet, extranet, the Internet, wide area network (WAN), virtual private network (VPN), or any other appropriate architecture or system that facilitates communications in a network environment depending on the network topology. A network can comprise any number of hardware or software elements coupled to (and in communication with) each other through a communications medium. In some embodiments, a network may simply comprise a cable (e.g., an Ethernet cable), air, or other transmission medium.


In one particular instance, the architecture of the present disclosure can be associated with a service provider deployment. In other examples, the architecture of the present disclosure is equally applicable to other communication environments, such as an enterprise WAN deployment. The architecture of the present disclosure may include a configuration capable of transmission control protocol/internet protocol (TCP/IP) communications for the transmission and/or reception of packets in a network.



FIG. 2 illustrates a block diagram of a network element 108 in accordance with certain embodiments. In the embodiment depicted, network element 108 includes a computer system to facilitate performance of its operations. In particular embodiments, a computer system may include a processor, memory, storage, one or more communication interfaces, and a display. As an example, network element 108 comprises a computer system that includes one or more processors 202, memory 204, storage 206, and one or more communication interfaces 210. These components may work together in order to provide functionality described herein. Network element 108 may also comprise forwarding logic 208. Forwarding logic 208 may be operable to apply user-specified traffic forwarding rules to traffic received via communication interface 210 and send the results to communication interface 210 for forwarding out of the appropriate port of network element 108.


Communication interface 210 may be used for the communication of signaling and/or data between network element 108 and one or more networks (e.g., 112a or 112b) and/or network nodes 104 coupled to a network 112. For example, communication interface 210 may be used to send and receive network traffic such as data packets. Each communication interface 210 may send and receive data and/or signals according to a distinct standard such as Asynchronous Transfer Mode (ATM), Frame Relay, or Gigabit Ethernet (or other IEEE 802.3 standard). In a particular embodiment, communication interface 210 comprises one or more ports that may each function as an ingress and/or egress port. As one example, communication interface 210 may comprise a plurality of Ethernet ports.


Processor 202 may be a microprocessor, controller, or any other suitable computing device, resource, or combination of hardware, stored software and/or encoded logic operable to provide, either alone or in conjunction with other components of network element 108, network element functionality. In some embodiments, network element 108 may utilize multiple processors to perform the functions described herein.


The processor can execute any type of instructions to achieve the operations detailed herein in this Specification. In one example, the processor could transform an element or an article (e.g., data) from one state or thing to another state or thing. In another example, the activities outlined herein may be implemented with fixed logic or programmable logic (e.g., software/computer instructions executed by the processor), and the elements identified herein could be some type of a programmable processor, programmable digital logic (e.g., a field programmable gate array (FPGA), an erasable programmable read only memory (EPROM), an electrically erasable programmable ROM (EEPROM)), or an application specific integrated circuit (ASIC) that includes digital logic, software, code, electronic instructions, or any suitable combination thereof.


Memory 204 and/or storage 206 may comprise any form of volatile or non-volatile memory including, without limitation, magnetic media (e.g., one or more tape drives), optical media, random access memory (RAM), read-only memory (ROM), flash memory, removable media, or any other suitable local or remote memory component or components. Memory 204 and/or storage 206 may store any suitable data or information utilized by network element 108, including software embedded in a computer readable medium, and/or encoded logic incorporated in hardware or otherwise stored (e.g., firmware). Memory 204 and/or storage 206 may also store the results and/or intermediate results of the various calculations and determinations performed by processor 202.


In certain example implementations, the customized traffic forwarding functions outlined herein may be implemented by logic encoded in one or more non-transitory, tangible media (e.g., embedded logic provided in an application specific integrated circuit (ASIC), digital signal processor (DSP) instructions, software (potentially inclusive of object code and source code) to be executed by one or more processors, or other similar machine, etc.). In some of these instances, one or more memory elements can store data used for the operations described herein. This includes the memory element being able to store instructions (e.g., software, code, etc.) that are executed to carry out the activities described in this Specification.


Any of the memory items discussed herein should be construed as being encompassed within the broad term ‘memory element.’ Similarly, any of the potential processing elements, modules, and machines described in this Specification should be construed as being encompassed within the broad term ‘processor.’


In one implementation, a network element 108 described herein may include software to achieve (or to facilitate) the functions discussed herein for customized traffic forwarding, where the software is executed on one or more processors 202 to carry out the functions. This could include the implementation of one or more instances of an operating system 212, policy updater 214, and/or any other suitable elements that foster the activities discussed herein. In other embodiments, one or more of these elements may be implemented in hardware and/or firmware, such as reprogrammable logic in an FPGA or ASIC.


In some embodiments, the operating system 212 provides an application program interface (API) that allows a network administrator to provide information to the network element 108. For example, the API may allow the network administrator to specify traffic customization information such as one or more load balancing commands (which may include load balancing criteria). In various embodiments, a network administrator may specify the traffic customization information through one or more interfaces, such as a command-line interface (CLI) (e.g., manually entered or entered via a script) or a graphical user interface (GUI), using any suitable language (e.g., Extensible Markup Language (XML) or Python).


The operating system 212 may be capable of communicating the traffic customization information received from the network administrator to other portions of network element 108 (e.g., to forwarding logic 208). In particular embodiments, the operating system 212 is operable to utilize a policy updater 214 to program logic of network element 108 based on traffic customization information received by the operating system 212 (e.g., from the network administrator).


In various embodiments, the operating system 212 receives load balancing commands and communicates with forwarding logic 208 to implement these commands. In various embodiments, these commands are converted into a format suitable for use by forwarding logic 208 (e.g., “forwarding entries” as described herein) before being communicated to forwarding logic 208. In other embodiments, the load balancing commands are received by the operating system 212 in a format used by forwarding logic 208, such that no conversion is needed. In yet other embodiments, forwarding logic 208 may convert the load balancing commands into a format suitable for use by forwarding logic 208. In some embodiments, a load balancing command may specify that it should be applied to a single port of network element 108 or to multiple ports of the network element.


A load balancing command may specify that traffic matching certain criteria should be load balanced among a plurality of network nodes. Any suitable matching criteria may be specified, such as one or more identifiers associated with the source and/or destination of an incoming data packet. For example, the matching criteria may include one or more source addresses (e.g., IP addresses, media access control (MAC) addresses, or other addresses identifiable in a data packet) and/or one or more destination addresses (e.g., IP addresses, MAC addresses, or other addresses). In some embodiments, the matching criteria may alternatively or additionally include one or more protocols (e.g., one or more L3 protocols such as IPv4 or IPv6, or one or more L4 protocols such as TCP or User Datagram Protocol (UDP)), one or more quality of service (QoS) parameters, one or more virtual local area network (VLAN) identifiers, and/or other suitable information associated with (e.g., specified by) the packet. As another example, the matching criteria may include one or more source or destination L4 ports associated with (e.g., specified by) the packet.
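
For illustration, a forwarding entry built from such matching criteria can be modeled as a set of value/mask pairs plus an action. The following is a minimal sketch in Python, not the patent's implementation; all names and the field layout are hypothetical:

    from dataclasses import dataclass
    from ipaddress import IPv4Address

    @dataclass(frozen=True)
    class ForwardingEntry:
        # Each field matches when (packet_field & mask) == value; a mask of 0
        # makes the field a full wildcard.
        src_ip: int
        src_mask: int
        dst_ip: int
        dst_mask: int
        l4_protocol: int | None  # e.g., 6 = TCP, 17 = UDP; None = wildcard
        l4_dst_port: int | None  # e.g., 80 for HTTP; None = wildcard
        priority: int
        action: str              # e.g., "redirect:0x60", "permit", "deny"

    # Example: match HTTP traffic to a virtual IP from any source.
    entry = ForwardingEntry(
        src_ip=0, src_mask=0,
        dst_ip=int(IPv4Address("200.200.0.0")), dst_mask=0xFFFFFFFF,
        l4_protocol=6, l4_dst_port=80,
        priority=10, action="redirect:0x60",
    )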


A load balancing command may specify a load balancing scheme. For example, with respect to the embodiment depicted in FIG. 1, a load balancing scheme may specify how traffic forwarded by network element 108 is to be distributed among servers 104a-d. Network element 108 may load balance among any number of suitable network nodes 104, such as firewalls, application servers, other load balancers (e.g., load balancers that perform load balancing in software), inspection devices, etc.


In particular embodiments, a user may provide a load balancing command specifying that particular traffic is load balanced while other traffic is not load balanced (e.g., the other traffic may be blocked or routed normally through a forwarding table). In one embodiment, a network administrator or other entity associated with network element 108 may specify one or more destination addresses (e.g., a virtual IP address or range of virtual IP addresses of the network element 108) and one or more L4 parameters (such as one or more L4 protocols and/or L4 destination ports) as load balancing criteria. Thus, traffic matching these criteria will be load balanced among available load balancing network nodes, while traffic not matching these criteria will be handled in another manner (e.g., according to a forwarding table). In some embodiments, these criteria may be applied to traffic received at a particular port, at a group of logically associated ports, or at all ports of the network element 108.


In some embodiments, a load balancing command may be expressed at a higher level of abstraction than one or more corresponding forwarding entries that are created based on the load balancing command. For example, a load balancing command may merely specify that network traffic is to be split evenly among available servers of a device group (e.g., the four servers 104a-d) while the resulting forwarding entries may specify matching criteria and redirection information to implement the load balancing scheme specified by the load balancing command. As an example, network element 108 may receive a load balancing command to load balance incoming traffic among a plurality of network nodes and may create a forwarding entry for each network node that specifies a distinct range of source IP addresses. Thus, when incoming network traffic matches the address range specified in a particular forwarding entry, the network traffic is redirected to the network node specified in the forwarding entry. In various embodiments, the forwarding entries may have other load balancing criteria that must also be met in order to be applied to incoming network traffic, such as any of the criteria described above.
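
As a concrete and purely illustrative sketch of this command-to-entry expansion, the helper below splits the last octet of the source IP space evenly across a device group, one range per node; the function name and the per-octet granularity are assumptions, not the patent's method:

    def make_source_ranges(node_count: int) -> list[tuple[int, int]]:
        """Divide the 0-255 space of the source IP's last octet into
        node_count contiguous (low, high) ranges, one per network node."""
        step = 256 // node_count
        return [(i * step, 255 if i == node_count - 1 else (i + 1) * step - 1)
                for i in range(node_count)]

    # One forwarding entry per node: traffic whose source's last octet falls
    # in the node's range is redirected to that node.
    nodes = ["1.1.1.1", "1.1.1.2", "1.1.1.3", "1.1.1.4"]
    for node, (low, high) in zip(nodes, make_source_ranges(len(nodes))):
        print(f"source last octet {low}-{high} -> redirect to {node}")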


The load balancing commands may be generated by any suitable entity, such as the network administrator or various features of network element 108. When a load balancing command is generated or received by a component of network element 108, the load balancing command may be passed to the operating system 212, which then communicates the command or resulting forwarding entries to port selection logic 220. In various embodiments, operating system 212 or another network element component may update the forwarding entries resulting from the load balancing command in response to a change in network topology (e.g., when an additional network node 104 becomes available for load balancing or one of the network nodes 104a-d goes down). In particular embodiments, this may include changing a range of source IP addresses specified in each forwarding entry such that network traffic is distributed evenly (or otherwise) among the available network nodes 104 in accordance with the load balancing command.


In particular embodiments, operating system 212 creates one or more additional forwarding entries after generating the forwarding entries from the load balancing command(s) and/or other commands. For example, if the existing forwarding entries do not cover each possible scenario, a default forwarding entry (that may be applied if no other match is found) may be generated that denies all traffic (e.g., if the forwarding entries include one or more entries permitting certain traffic) or permits all traffic (e.g., if the forwarding entries include one or more entries denying certain traffic). In various embodiments, the traffic forwarding entries may be placed in order of priority such that a traffic forwarding entry with a higher priority is checked for a match with a packet to be forwarded before a traffic forwarding entry with a lower priority is checked for a match with the packet. In other embodiments, traffic forwarding entries may each have a priority assigned to them, such that if network traffic matches multiple traffic forwarding entries, the traffic forwarding entry with the highest priority will be applied to the traffic. In some embodiments, a default forwarding entry (e.g., a forwarding entry specifying that all traffic should be permitted) has the lowest priority of the traffic forwarding entries. In various embodiments, the priorities of the traffic forwarding entries are based on user-specified rules associated with the load balancing and/or other commands that are merged to form the traffic forwarding entries.
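
One way such a default entry could be generated and ordered is sketched below; the dict layout and helper name are hypothetical, and real hardware would typically encode priority implicitly in TCAM entry order:

    def add_default_entry(entries: list[dict]) -> list[dict]:
        """Append a lowest-priority catch-all entry: deny-all if any existing
        entry permits traffic, otherwise permit-all; then sort so that
        higher-priority entries are checked first."""
        lowest = min((e["priority"] for e in entries), default=0) - 1
        action = "deny" if any(e["action"] == "permit" for e in entries) else "permit"
        entries.append({"match": "any", "priority": lowest, "action": action})
        entries.sort(key=lambda e: e["priority"], reverse=True)
        return entries

    rules = [{"match": "tcp/80 to 200.200.0.0", "priority": 10, "action": "permit"}]
    print(add_default_entry(rules))  # catch-all "deny" appended at priority 9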


As mentioned earlier, the policy updater 214 may be responsible for sending the forwarding entries to the forwarding logic 208 to be implemented. As one example, the policy updater 214 may instruct that the forwarding entries be programmed into a memory such as a content addressable memory (e.g., ternary content addressable memory (TCAM) 224) of the port selection logic 220 (e.g., by calling a hardware driver associated with the TCAM).


Forwarding logic 208 is operable to apply the forwarding entries to network traffic received by network element 108. In the embodiment depicted, forwarding logic 208 includes parsing logic 216, key construction logic 218, port selection logic 220, and packet modification logic 222. In various embodiments, any suitable portion of forwarding logic 208 may comprise programmable logic (e.g., software/computer instructions executed by a processor), fixed logic, programmable digital logic (e.g., an FPGA, an EPROM, an EEPROM, or other device), an ASIC that includes digital logic, software, code, electronic instructions, or any suitable combination thereof. In a particular embodiment, forwarding logic 208 comprises an ASIC or other device that is operable to perform customized traffic forwarding in hardware by utilizing logic (e.g., one or more memories such as TCAM 224) that is reprogrammable by an entity (e.g., the operating system 212) based on traffic customization information (e.g., received from a network administrator). In such an embodiment, the functions of parsing logic 216, key construction logic 218, port selection logic 220, and packet modification logic 222 are performed in hardware by such logic (in contrast to an implementation where such functions may be performed through software instructions executed by a network processor). Reconfiguration of the logic may be performed by storing different values in memory of the forwarding logic 208 such as TCAM 224 or other memory element. In various embodiments, the values stored in the memory may provide control inputs to forwarding logic 208, but are not typical instructions that are part of an instruction set executed by a processor. By implementing this logic in hardware, the network element 108 may process incoming traffic (e.g., switch/bridge the traffic) at much higher speeds (e.g., at line rate) than an appliance that utilizes a network processor to process incoming network traffic.


Parsing logic 216 may be operable to receive packets from the ingress ports of network element 108. The parsing logic 216 may be configured to parse information from a received packet. Parsing logic 216 may be configured to parse any suitable information, such as one or more protocols associated with (e.g., included within) the packet, a source address (e.g., IP address, MAC address, or other address) of the packet, a destination address (e.g., IP address, MAC address, or other address) of the packet, one or more ports (e.g., source or destination L4 port) associated with the packet, a VLAN identifier, a quality of service (QoS) value, or other suitable information from the packet. In some embodiments, the information to be parsed by parsing logic 216 is based on the information needed for various forwarding entries of network element 108 (which could include forwarding entries associated with various different ports of network element 108). In some embodiments, the parsing logic 216 is configured on a port-by-port basis, such that packets from each port may be parsed based on the forwarding entries associated with that port.


The information parsed by parsing logic 216 is passed to key construction logic 218. Key construction logic 218 constructs a key from the output of the parsing logic 216. The key may contain all or a portion of the information parsed from a packet. The key is then passed to the port selection logic 220.
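
A behavioral sketch of key construction follows; the field order and widths are illustrative assumptions, since actual hardware defines its own per-port key format:

    import struct

    def construct_key(src_ip: int, dst_ip: int, protocol: int, dst_port: int) -> bytes:
        """Pack the parsed fields into a fixed-layout lookup key:
        4-byte source IP, 4-byte destination IP, 1-byte protocol, 2-byte port."""
        return struct.pack("!IIBH", src_ip, dst_ip, protocol, dst_port)

    key = construct_key(0x0A000001, 0xC8C80000, 6, 80)  # 10.0.0.1 -> 200.200.0.0, TCP/80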


Prior to receiving a key associated with a data packet, port selection logic 220 may receive forwarding entries (or commands) from operating system 212 and configure itself to implement the forwarding entries. For example, port selection logic 220 may store forwarding entries associated with a particular port in a content addressable memory, such as a TCAM 224. When a packet is received on that port, the key generated by key construction logic 218 (and any other suitable information associated with the packet) may be passed to the port selection logic 220. The port selection logic 220 uses the key to perform a lookup in the TCAM 224. Port selection logic 220 will then forward the traffic through the appropriate port of network element 108 in accordance with the forwarding entry that matches the information in the key from the packet (and has the highest priority if multiple forwarding entries match the key). If the packet is to be redirected (e.g., because the key matches the specified load balancing criteria), packet modification logic 222 may modify the appropriate fields of the packet (e.g., destination IP address and/or destination MAC address) before the packet is forwarded out of the appropriate egress port of network element 108. If the packet is not to be redirected according to load balancing criteria, then the usual forwarding process may be applied to the packet. For example, port selection logic 220 may access a forwarding table (e.g., based on a destination address of the packet) to determine which port to forward the packet to. In some embodiments, the forwarding table is stored in a separate memory (e.g., static random access memory) from the forwarding entries (e.g., TCAM 224).
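
The lookup itself can be modeled in software as follows. This sketch checks entries sequentially, whereas a TCAM compares all entries in parallel and returns the highest-priority hit in a single lookup; the entry fields follow the hypothetical value/mask layout used in the earlier sketches:

    def tcam_lookup(entries: list[dict], src_ip: int, dst_ip: int,
                    protocol: int, dst_port: int) -> dict | None:
        """Return the highest-priority entry whose value/mask fields match,
        or None so the caller can fall back to the normal forwarding table."""
        best = None
        for e in entries:
            if ((src_ip & e["src_mask"]) == e["src_ip"]
                    and (dst_ip & e["dst_mask"]) == e["dst_ip"]
                    and e["l4_protocol"] in (None, protocol)
                    and e["l4_dst_port"] in (None, dst_port)):
                if best is None or e["priority"] > best["priority"]:
                    best = e
        return best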



FIG. 3 illustrates example load balancing criteria and traffic forwarding entries in accordance with certain embodiments. In the embodiment depicted, block 300 represents example load balancing criteria and block 350 represents example traffic forwarding entries 352 and 354. In various embodiments, such entries could be utilized by forwarding logic 208 (e.g., the entries may be stored in TCAM 224 and utilized by hardware to forward incoming network traffic).


The load balancing criteria in block 300 specify a destination IP address expressed as an IP address (“200.200.0.0”) and a mask (“255.255.255.255”). When compared against a destination IP address of an incoming data packet, the mask may be applied to the IP address of the packet (e.g., a logical AND operation may be applied with the mask and the destination IP address) and the result is compared against the IP address specified in the load balancing criteria to determine whether a match occurs. This allows specification of one IP address or multiple IP addresses using a common format (i.e., IP address and mask). In various embodiments, the destination IP address(es) specified in the load balancing criteria may be one or more virtual IP addresses of network element 108.
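
In code, the mask-and-compare described above reduces to a bitwise AND; a minimal sketch using Python's ipaddress module:

    from ipaddress import IPv4Address

    def dst_matches(packet_dst: str, rule_ip: str, rule_mask: str) -> bool:
        """(packet destination AND mask) == rule IP."""
        return ((int(IPv4Address(packet_dst)) & int(IPv4Address(rule_mask)))
                == int(IPv4Address(rule_ip)))

    assert dst_matches("200.200.0.0", "200.200.0.0", "255.255.255.255")
    assert not dst_matches("200.200.0.1", "200.200.0.0", "255.255.255.255")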


The example load balancing criteria also depict an L4 protocol (“TCP”) and an L4 port (“80”). Thus, the load balancing criteria in this depiction specify that network traffic having a destination IP address of 200.200.0.0, an L4 protocol of TCP, and a destination L4 port of 80 (thus signifying Hypertext Transfer Protocol (HTTP) traffic) will be load balanced. Other protocols and/or ports may be specified in the load balancing criteria. For example, if the L4 protocol is TCP and/or UDP, the L4 destination port could be 20 (signifying File Transfer Protocol (FTP) data traffic), 25 (signifying Simple Mail Transfer Protocol (SMTP) traffic), 53 (signifying Domain Name System (DNS) traffic), another suitable port number, or a combination of any of these.


As depicted, the load balancing criteria are associated with a device group. A device group may be one or more network nodes 104 associated with load balancing criteria. In the embodiment depicted, the network nodes 104 are identified by IP addresses (“1.1.1.1”, “1.1.1.2”, “1.1.1.3”, and “1.1.1.4”), though network nodes may be identified in any suitable manner. The network traffic matching the destination IP range, L4 protocol, and L4 destination port specified by the load balancing criteria may be load balanced among the network nodes specified by the device group.


Block 350 represents traffic forwarding entries that may be produced based on the load balancing criteria specified in block 300. The forwarding entries 352 each correspond to a network node in the device group. Each network node is coupled to a port of the network element 108 identified by one of the port identifiers (e.g., 0x60, 0x61, 0x5f, and 0x62). Each forwarding entry 352 specifies that traffic having a destination IP address of 200.200.0.0, an L4 protocol of TCP, and an L4 destination port of 80 will be redirected to the specified port (and corresponding network node) based on its source IP address. As with the load balancing criteria, the source IP address ranges may be specified in IP address/mask format (where the mask is applied to the IP address of the traffic and the result is compared against the IP address specified in the range), though in other embodiments the ranges may be specified in any suitable manner. Each of the forwarding entries 352 will result in the redirection of traffic matching the load balancing criteria to a different port based on the value of the last octet of the source IP address of the traffic. In this example, the traffic is load balanced evenly across the network nodes of the device group, though in other embodiments a heavier load of traffic could be redirected to a particular network node if desired by specifying a larger range of source IP addresses in the forwarding entry corresponding to that network node.
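
For an even split across a power-of-two number of nodes, each source range can be expressed as a single value/mask pair on the last octet, which is how a ternary memory could encode it. A hedged sketch (the encoding is an assumption; the patent only requires distinct ranges per entry):

    def last_octet_buckets(node_count: int = 4) -> list[tuple[int, int]]:
        """Return (value, mask) pairs over the last source-IP octet that split
        traffic evenly; with four nodes, mask 0xC0 selects the top two bits,
        giving buckets 0x00, 0x40, 0x80, 0xC0 of 64 addresses each."""
        assert node_count & (node_count - 1) == 0, "needs a power of two"
        shift = 8 - (node_count.bit_length() - 1)
        mask = (0xFF << shift) & 0xFF
        return [((i << shift) & 0xFF, mask) for i in range(node_count)]

    print([(hex(v), hex(m)) for v, m in last_octet_buckets()])
    # [('0x0', '0xc0'), ('0x40', '0xc0'), ('0x80', '0xc0'), ('0xc0', '0xc0')]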


In the embodiment depicted, block 350 also depicts a forwarding entry that permits traffic regardless of the source IP address or destination IP address. For example, entry 354 denotes allowable IP addresses for the source IP address and the destination IP address in Classless Inter-Domain Routing (CIDR) notation, though any suitable notation may be used. In the embodiment depicted, this forwarding entry has a lower priority than forwarding entries 352, such that it is applied only if network traffic does not match any of the forwarding entries 352. The permitted traffic that does not match one of the load balancing forwarding entries would be forwarded in a normal manner (e.g., based on a destination MAC address of the packet using a forwarding table).


This embodiment is a simplified example. In other embodiments, other actions may be applied to incoming traffic. For example, particular traffic could be redirected, blocked, or permitted according to any suitable criteria set by the network administrator, network element 108, and/or other entity.



FIG. 4 illustrates a block diagram 400 of one or more network elements embodied within a chassis 402 in accordance with certain embodiments. Chassis 402 may include various slots configured to electrically and mechanically couple to various circuit boards (e.g., line cards), such as one or more supervisor module(s) 404, one or more network element(s) 406, one or more fabric module(s) 408, one or more power supplies 410, one or more fan trays 412, or other components. In various embodiments, a network element 406 may correspond to network element 108. In other embodiments, the entire chassis 402 may correspond to network element 108.


A supervisor module 404 may include a computer system with at least one processor and may be operable to scale the control plane, management, and data plane services for the chassis and its components. A supervisor module 404 may control the Layer 2 and 3 services, redundancy capabilities, configuration management, status monitoring, power and environmental management of the chassis and its components. In some embodiments, supervisor module 404 provides centralized arbitration to the system fabric for all line cards.


Cisco NX-OS is designed to support distributed multithreaded processing on symmetric multiprocessors (SMPs), multicore CPUs, and distributed line-card processors. Computationally intensive tasks, such as hardware table programming, can be offloaded to dedicated processors distributed across the line cards. Cisco NX-OS modular processes may be instantiated on demand, each in a separate protected memory space. Thus, processes are started and system resources allocated only when a feature is enabled.


In a particular embodiment, supervisor module 404 receives commands from users, processes these commands, and sends relevant configuration information to the network elements 406. For example, a user may send a load balancing or other command to supervisor module 404. Supervisor module 404 may generate traffic forwarding entries based on the command. Supervisor module 404 may also determine which ports the commands apply to and then send the forwarding entries to the relevant network element 406.


Network element 406 may include a distributed forwarding engine for L2/L3 forwarding. Network element 406 may include integrated hardware support for protecting the supervisor CPU from excessive traffic; for providing ACL counters and logging capability; for providing Layer 2 to Layer 4 ACLs for both IPv4 and IPv6 traffic; and for any other functionality described herein with respect to network element 108.


Fabric module 408 is capable of coupling the various network elements 406 in the chassis together (e.g., through their respective ports). In connection with the supervisor module 404 and network elements 406, the fabric module 408 may provide virtual output queuing (VoQ) and credit-based arbitration to a crossbar switch to increase performance of the distributed forwarding system implemented by chassis 402.


Chassis 402 may also include one or more power supplies 410 for powering the various components of chassis 402 and one or more fan trays 412 for cooling the various components of chassis 402.



FIG. 5 illustrates an example method for selectively load balancing network traffic in accordance with certain embodiments. The method begins at step 502, where load balancing criteria are received (e.g., from a network administrator). At step 504, traffic forwarding entries are formed based on the load balancing criteria and programmed into a memory of a network element 108.


At step 506, network traffic is received. At step 508, it is determined whether the network traffic matches the load balancing criteria specified in step 502. If it does, the network traffic is load balanced among a group of network nodes associated with the load balancing criteria at step 510. If it does not, the network traffic may be forwarded in a normal manner. For example, the traffic may be blocked based on other forwarding entries or forwarded based on a forwarding table of the network element.
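
Tying the steps together, the decision at steps 508-510 can be sketched as follows; the dict-based packets and range-style entries are hypothetical stand-ins for the hardware path described above:

    def process_packet(pkt: dict, lb_entries: list[dict],
                       forwarding_table: dict) -> str:
        """Step 508: check the load balancing criteria; step 510: redirect to
        the matching node, else fall back to normal forwarding."""
        for e in lb_entries:
            if (pkt["dst_ip"] == e["dst_ip"]
                    and pkt["protocol"] == e["protocol"]
                    and pkt["dst_port"] == e["dst_port"]
                    and e["src_low"] <= pkt["src_last_octet"] <= e["src_high"]):
                return e["node"]
        return forwarding_table.get(pkt["dst_ip"], "flood")

    entry = {"dst_ip": "200.200.0.0", "protocol": "tcp", "dst_port": 80,
             "src_low": 0, "src_high": 63, "node": "1.1.1.1"}
    pkt = {"dst_ip": "200.200.0.0", "protocol": "tcp", "dst_port": 80,
           "src_last_octet": 17}
    print(process_packet(pkt, [entry], {}))  # -> "1.1.1.1"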


Some of the steps illustrated in FIG. 5 may be repeated, combined, modified or deleted where appropriate, and additional steps may also be added to the flowchart. Additionally, steps may be performed in any suitable order without departing from the scope of particular embodiments.


It is also important to note that the steps in FIG. 5 illustrate only some of the possible scenarios that may be executed by, or within, the network elements described herein. Some of these steps may be deleted or removed where appropriate, or these steps may be modified or changed considerably without departing from the scope of the present disclosure. In addition, a number of these operations may have been described as being executed concurrently with, or in parallel to, one or more additional operations. However, the timing of these operations may be altered considerably. The preceding operational flows have been offered for purposes of example and discussion. Substantial flexibility is provided by the network elements 108 in that any suitable arrangements, chronologies, configurations, and timing mechanisms may be provided without departing from the teachings of the present disclosure.


Additionally, it should be noted that with the examples provided above, interaction may be described in terms of one or more network elements. However, this has been done for purposes of clarity and example only. In certain cases, it may be easier to describe one or more of the functionalities of a given set of flows by only referencing a limited number of network elements. It should be appreciated that the systems described herein are readily scalable and, further, can accommodate a large number of components, as well as more complicated/sophisticated arrangements and configurations. Accordingly, the examples provided should not limit the scope or inhibit the broad techniques of selectively load balancing network traffic, as potentially applied to a myriad of other architectures.


Numerous other changes, substitutions, variations, alterations, and modifications may be ascertained to one skilled in the art and it is intended that the present disclosure encompass all such changes, substitutions, variations, alterations, and modifications as falling within the scope of the appended claims. In order to assist the United States Patent and Trademark Office (USPTO) and, additionally, any readers of any patent issued on this application in interpreting the claims appended hereto, Applicant wishes to note that the Applicant: (a) does not intend any of the appended claims to invoke paragraph six (6) of 35 U.S.C. section 112 as it exists on the date of the filing hereof unless the words “means for” or “step for” are specifically used in the particular claims; and (b) does not intend, by any statement in the specification, to limit this disclosure in any way that is not otherwise reflected in the appended claims.

Claims
  • 1. A method comprising: programming a network element with a plurality of forwarding entries to enable load balancing of network traffic, each of the plurality of forwarding entries including one of a plurality of variable ranges of internet protocol (IP) addresses, each of the plurality of variable ranges including a first IP address defining a low end of a respective one of the plurality of variable ranges and a second IP address defining a high end of the respective one of the plurality of variable ranges; receiving, via the network element, the network traffic; and performing, via the network element, the load balancing of the network traffic based on the plurality of variable ranges.
  • 2. The method of claim 1, further comprising: receiving load balancing criteria, the load balancing criteria defining a portion of the plurality of variable ranges using one or more destination IP addresses.
  • 3. The method of claim 2, wherein the one or more destination IP addresses corresponds to one or more virtual IP addresses of the network element.
  • 4. The method of claim 1, further comprising: receiving load balancing criteria with a layer 4 protocol.
  • 5. The method of claim 1, further comprising: receiving load balancing criteria with a layer 4 destination port.
  • 6. The method of claim 1, further comprising: receiving load balancing criteria; and accessing a forwarding table of the network element for network traffic that does not meet the load balancing criteria, wherein the forwarding table of the network element is accessed to determine an egress port for the network traffic that does not meet the load balancing criteria, the forwarding table is based on a destination IP address of a packet of the network traffic that does not meet the load balancing criteria, and the forwarding table is stored in another memory separate from a content addressable memory of the network element with the plurality of forwarding entries.
  • 7. The method of claim 1, wherein each of the plurality of forwarding entries specifies another indication of a port to forward network traffic matching load balancing criteria and comprises a source IP address within a range of source IP addresses.
  • 8. The method of claim 1, wherein the network element determines whether any of the plurality of forwarding entries applies to a data packet of the network traffic in a single clock cycle of the network element.
  • 9. The method of claim 1, further comprising: receiving load balancing criteria via a command line interface from a user of the network element.
  • 10. The method of claim 1, wherein the plurality of forwarding entries are stored in a ternary content-addressable memory (TCAM) of the network element.
  • 11. An apparatus comprising: at least one memory with instructions; and a processor configured to execute the instructions and cause performance of operations comprising: programming a plurality of forwarding entries to enable load balancing of network traffic, each of the plurality of forwarding entries including a plurality of variable ranges of internet protocol (IP) addresses, each of the plurality of variable ranges including a first IP address defining a low end of a respective one of the plurality of variable ranges and a second IP address defining a high end of the respective one of the plurality of variable ranges, receiving the network traffic, and performing the load balancing of the network traffic based on the plurality of variable ranges.
  • 12. The apparatus of claim 11, wherein the at least one memory includes a ternary content addressable memory.
  • 13. The apparatus of claim 11, wherein each of the plurality of forwarding entries includes an identifier indicating a port through which to forward network traffic matching load balancing criteria and a source IP address within a range of source IP addresses.
  • 14. The apparatus of claim 11, wherein the operations include receiving a load balancing criteria, the load balancing criteria defining a portion of the plurality of variable ranges using one or more destination IP addresses, a layer 4 protocol, and a layer 4 destination port.
  • 15. The apparatus of claim 14, wherein the one or more destination IP addresses corresponds to one or more virtual IP addresses of the apparatus.
  • 16. A computer-readable non-transitory medium comprising one or more instructions that, when executed by a processor, cause the processor to perform operations comprising: programming a network element with a plurality of forwarding entries to enable load balancing of network traffic, each of the plurality of forwarding entries including a plurality of variable ranges of internet protocol (IP) addresses, each of the plurality of variable ranges including a first IP address defining a low end of a respective one of the plurality of variable ranges and a second IP address defining a high end of the respective one of the plurality of variable ranges; receiving, via the network element, the network traffic; and performing the load balancing of the network traffic based on the plurality of variable ranges.
  • 17. The medium of claim 16, wherein the operations include receiving load balancing criteria via a command line interface from a user of the network element.
  • 18. The medium of claim 16, wherein each of the plurality of forwarding entries includes an indication of a port to forward network traffic matching load balancing criteria and a source IP address within a range of source IP addresses.
  • 19. The medium of claim 16, wherein the operations include receiving load balancing criteria, the load balancing criteria defining a portion of the plurality of variable ranges using one or more destination IP addresses, a layer 4 protocol, and a layer 4 destination port.
  • 20. The method of claim 1, further comprising: increasing a range of source IP addresses of the plurality of variable ranges to increase a load of traffic directed to a particular node.
RELATED APPLICATIONS

The instant application is a Continuation of, and claims priority to, U.S. patent application Ser. No. 14/693,925, entitled SELECTIVE LOAD BALANCING OF NETWORK TRAFFIC, filed Apr. 23, 2015, which claims the benefit of U.S. Provisional Application Ser. No. 62/143,081, entitled SYSTEMS AND METHODS FOR PRUNING AND LOAD BALANCING NETWORK TRAFFIC, filed Apr. 4, 2015, the contents of which are herein incorporated by reference in their entireties.

20160134557 Steinder et al. May 2016 A1
20160147676 Cha et al. May 2016 A1
20160162436 Raghavan et al. Jun 2016 A1
20160188527 Cherian et al. Jun 2016 A1
20160226755 Hammam et al. Aug 2016 A1
20160253078 Ebtekar et al. Sep 2016 A1
20160254968 Ebtekar et al. Sep 2016 A1
20160261564 Foxhoven et al. Sep 2016 A1
20160277368 Narayanaswamy et al. Sep 2016 A1
20160292611 Boe et al. Oct 2016 A1
20160352682 Chang Dec 2016 A1
20160378389 Hrischuk et al. Dec 2016 A1
20170005948 Melander et al. Jan 2017 A1
20170024260 Chandrasekaran et al. Jan 2017 A1
20170026470 Bhargava et al. Jan 2017 A1
20170034199 Zaw Feb 2017 A1
20170041342 Efremov et al. Feb 2017 A1
20170054659 Ergin et al. Feb 2017 A1
20170063674 Maskalik et al. Mar 2017 A1
20170097841 Chang et al. Apr 2017 A1
20170099188 Chang et al. Apr 2017 A1
20170104755 Arregoces et al. Apr 2017 A1
20170126583 Xia May 2017 A1
20170147297 Krishnamurthy et al. May 2017 A1
20170163569 Koganti Jun 2017 A1
20170192823 Karaje et al. Jul 2017 A1
20170264663 Bicket et al. Sep 2017 A1
20170302521 Lui et al. Oct 2017 A1
20170310556 Knowles et al. Oct 2017 A1
20170317932 Paramasivam Nov 2017 A1
20170339070 Chang et al. Nov 2017 A1
20180069885 Patterson et al. Mar 2018 A1
20180173372 Greenspan et al. Jun 2018 A1
20180174060 Velez-Rojas et al. Jun 2018 A1
Foreign Referenced Citations (14)
Number Date Country
101719930 Jun 2010 CN
101394360 Jul 2011 CN
102164091 Aug 2011 CN
102918499 Feb 2013 CN
104320342 Jan 2015 CN
105740084 Jul 2016 CN
2228719 Sep 2010 EP
2439637 Apr 2012 EP
2645253 Nov 2014 EP
10-2015-0070676 May 2015 KR
M394537 Dec 2010 TW
WO 2009155574 Dec 2009 WO
WO 2010030915 Mar 2010 WO
WO 2013158707 Oct 2013 WO
Non-Patent Literature Citations (58)
Entry
Al-Harbi, S.H., et al., “Adapting k-means for supervised clustering,” Jun. 2006, Applied Intelligence, vol. 24, Issue 3, pp. 219-226.
Amedro, Brian, et al., “An Efficient Framework for Running Applications on Clusters, Grids and Cloud,” 2010, 17 pages.
Author Unknown, “5 Benefits of a Storage Gateway in the Cloud,” Blog, TwinStrata, Inc., Jul. 25, 2012, XP055141645, 4 pages, https://web.archive.org/web/20120725092619/http://blog.twinstrata.com/2012/07/10//5-benefits-of-a-storage-gateway-in-the-cloud.
Author Unknown, “Joint Cisco and VMWare Solution for Optimizing Virtual Desktop Delivery: Data Center 3.0: Solutions to Accelerate Data Center Virtualization,” Cisco Systems, Inc. and VMware, Inc., Sep. 2008, 10 pages.
Author Unknown, “A Look at DeltaCloud: The Multi-Cloud API,” Feb. 17, 2012, 4 pages.
Author Unknown, “About Deltacloud,” Apache Software Foundation, Aug. 18, 2013, 1 page.
Author Unknown, “Architecture for Managing Clouds, A White Paper from the Open Cloud Standards Incubator,” Version 1.0.0, Document No. DSP-IS0102, Jun. 18, 2010, 57 pages.
Author Unknown, “Cloud Infrastructure Management Interface—Common Information Model (CIMI-CIM),” Document No. DSP0264, Version 1.0.0, Dec. 14, 2012, 21 pages.
Author Unknown, “Cloud Infrastructure Management Interface (CIMI) Primer,” Document No. DSP2027, Version 1.0.1, Sep. 12, 2012, 30 pages.
Author Unknown, “cloudControl Documentation,” Aug. 25, 2013, 14 pages.
Author Unknown, “Interoperable Clouds, A White Paper from the Open Cloud Standards Incubator,” Version 1.0.0, Document No. DSP-IS0101, Nov. 11, 2009, 21 pages.
Author Unknown, “Microsoft Cloud Edge Gateway (MCE) Series Appliance,” Iron Networks, Inc., 4 pages.
Author Unknown, “Open Data Center Alliance Usage: Virtual Machine (VM) Interoperability in a Hybrid Cloud Environment Rev. 1.2,” Open Data Center Alliance, Inc., 2013, 18 pages.
Author Unknown, “Real-Time Performance Monitoring on Juniper Networks Devices, Tips and Tools for Assessing and Analyzing Network Efficiency,” Juniper Networks, Inc., May 2010, 35 pages.
Author Unknown, “Use Cases and Interactions for Managing Clouds, A White Paper from the Open Cloud Standards Incubator,” Version 1.0.0, Document No. DSP-IS0103, Jun. 16, 2010, 75 pages.
Beyer, Steffen, “Module “Data::Locations?!”,” YAPC::Europe, London, UK, ICA, Sep. 22-24, 2000, XP002742700, 15 pages.
Bohner, Shawn A., “Extending Software Change Impact Analysis into COTS Components,” 2003, IEEE, 8 pages.
Borovick, Lucinda, et al., “Architecting the Network for the Cloud,” IDC White Paper, Jan. 2011, 8 pages.
Bosch, Greg, “Virtualization,” last modified Apr. 2012 by B. Davison, 33 pages.
Broadcasters Audience Research Board, “What's Next,” http://www.barb.co.uk/whats-next, accessed Jul. 22, 2015, 2 pages.
Cisco Systems, Inc., “Best Practices in Deploying Cisco Nexus 1000V Series Switches on Cisco UCS B and C Series Cisco UCS Manager Servers,” Cisco White Paper, Apr. 2011, 36 pages, http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9902/white_paper_c11-558242.pdf.
Cisco Systems, Inc., “Cisco Unified Network Services: Overcome Obstacles to Cloud-Ready Deployments,” Cisco White Paper, Jan. 2011, 6 pages.
Cisco Systems, Inc., “Cisco Intercloud Fabric: Hybrid Cloud with Choice, Consistency, Control and Compliance,” Dec. 10, 2014, 22 pages.
Cisco Technology, Inc., “Cisco Expands Videoscape TV Platform Into the Cloud,” Jan. 6, 2014, Las Vegas, Nevada, Press Release, 3 pages.
Citrix, “Citrix StoreFront 2.0” White Paper, Proof of Concept Implementation Guide, Citrix Systems, Inc., 2013, 48 pages.
Citrix, “CloudBridge for Microsoft Azure Deployment Guide,” 30 pages.
Citrix, “Deployment Practices and Guidelines for NetScaler 10.5 on Amazon Web Services,” White Paper, citrix.com, 2014, 14 pages.
CSS Corp, “Enterprise Cloud Gateway (ECG)—Policy driven framework for managing multi-cloud environments,” originally published on or about Feb. 11, 2012, 1 page, http://www.css-cloud.com/platform/enterprise-cloud-gateway.php.
De Canal, Marco, “Cloud Computing: Analisi Dei Modelli Architetturali E Delle Tecnologie Per Lo Sviluppo Di Applicazioni” (“Cloud Computing: Analysis of Architectural Models and Technologies for Application Development”), 2011-2012, 149 pages.
Fang, K., “LISP MAC-EID-TO-RLOC Mapping (LISP based L2VPN),” Network Working Group, Internet Draft, Cisco Systems, Jan. 2012, 12 pages.
Gedymin, Adam, “Cloud Computing with an emphasis on Google App Engine,” Sep. 2011, 146 pages.
Good, Nathan A., “Use Apache Deltacloud to administer multiple instances with a single API,” Dec. 17, 2012, 7 pages.
Herry, William, “Keep It Simple, Stupid: OpenStack nova-scheduler and its algorithm”, May 12, 2012, IBM, 12 pages.
Hewlett-Packard Company, “Virtual context management on network devices,” Research Disclosure, vol. 564, No. 60, Mason Publications, Hampshire, GB, Apr. 1, 2011, p. 524.
Hood, C. S., et al., “Automated Proactive Anomaly Detection,” 1997, Springer Science and Business Media Dordrecht, pp. 688-699.
Juniper Networks, Inc., “Recreating Real Application Traffic in Junosphere Lab,” Solution Brief, Dec. 2011, 3 pages.
Kenhui, “Musings on Cloud Computing and IT-as-a-Service: [Updated for Havana] OpenStack Compute for VSphere Admins, Part 2: Nova-Scheduler and DRS,” Jun. 26, 2013, Cloud Architect Musings, 12 pages.
Kolyshkin, Kirill, “Virtualization in Linux,” Sep. 1, 2006, XP055141648, 5 pages, https://web.archive.org/web/20070120205111/http://download.openvz.org/doc/openvz-intro.pdf.
Lerach, S.R.O., “Golem,” http://www.lerach.cz/en/products/golem, accessed Jul. 22, 2015, 2 pages.
Linthicum, David, “VM Import could be a game changer for hybrid clouds”, InfoWorld, Dec. 23, 2010, 4 pages.
Logan, Marcus, “Hybrid Cloud Application Architecture for Elastic Java-Based Web Applications,” F5 Deployment Guide Version 1.1, 2016, 65 pages.
Meireles, Fernando Miguel Dias, “Integrated Management of Cloud Computing Resources,” 2013-2014, 286 pages.
Mu, Shuai, et al., “uLibCloud: Providing High Available and Uniform Accessing to Multiple Cloud Storages,” 2012 IEEE, 8 pages.
Naik, Vijay K., et al., “Harmony: A Desktop Grid for Delivering Enterprise Computations,” Grid Computing, 2003, Fourth International Workshop on Proceedings, Nov. 17, 2003, pp. 1-11.
Nair, Srijith K., et al., “Towards Secure Cloud Bursting, Brokerage and Aggregation,” 2012, 8 pages, www.flexiant.com.
Nielsen, “SimMetry Audience Measurement—Technology,” http://www.nielsen-admosphere.eu/products-and-services/simmetry-audience-measurement-technology/, accessed Jul. 22, 2015, 6 pages.
Nielsen, “Television,” http://www.nielsen.com/us/en/solutions/measurement/television.html, accessed Jul. 22, 2015, 4 pages.
OpenStack, “Filter Scheduler,” updated Dec. 17, 2017, 5 pages, accessed on Dec. 18, 2017, https://docs.openstack.org/nova/latest/user/filter-scheduler.html.
Rabadan, J., et al., “Operational Aspects of Proxy-ARP/ND in EVPN Networks,” BESS Workgroup Internet Draft, draft-snr-bess-evpn-proxy-arp-nd-02, Oct. 6, 2015, 22 pages.
Saidi, Ali, et al., “Performance Validation of Network-Intensive Workloads on a Full-System Simulator,” Interaction between Operating System and Computer Architecture Workshop, (IOSCA 2005), Austin, Texas, Oct. 2005, 10 pages.
Shunra, “Shunra for HP Software; Enabling Confidence in Application Performance Before Deployment,” 2010, 2 pages.
Son, Jungmin, “Automatic decision system for efficient resource selection and allocation in inter-clouds,” Jun. 2013, 35 pages.
Sun, Aobing, et al., “IaaS Public Cloud Computing Platform Scheduling Model and Optimization Analysis,” Int. J. Communications, Network and System Sciences, 2011, 4, 803-811, 9 pages.
Toews, Everett, “Introduction to Apache jclouds,” Apr. 7, 2014, 23 pages.
Vilalta, R., et al., “An efficient approach to external cluster assessment with an application to martian topography,” Feb. 2007, 23 pages, Data Mining and Knowledge Discovery 14.1: 1-23, New York: Springer Science & Business Media.
Von Laszewski, Gregor, et al., “Design of a Dynamic Provisioning System for a Federated Cloud and Bare-metal Environment,” 2012, 8 pages.
Wikipedia, “Filter (software),” Wikipedia, Feb. 8, 2014, 2 pages, https://en.wikipedia.org/w/index.php?title=Filter_%28software%29&oldid=594544359.
Wikipedia, “Pipeline (Unix),” Wikipedia, May 4, 2014, 4 pages, https://en.wikipedia.org/w/index.php?title=Pipeline_%28Unix%29&oldid=606980114.
Related Publications (1)
Number Date Country
20190364102 A1 Nov 2019 US
Provisional Applications (1)
Number Date Country
62143081 Apr 2015 US
Continuations (1)
Number Date Country
Parent 14693925 Apr 2015 US
Child 16537966 US