Various exemplary embodiments disclosed herein relate generally to computer networking, and more particularly to cloud computing or use of data centers.
As cloud computing becomes more prevalent, enterprises and other entities are seeking to migrate varying types of applications into cloud data centers. Network Function Virtualization (NFV) has helped enable this migration of services into data centers. Some examples of virtual functions that may be run in a telecommunication service provider data center include Content Delivery, Evolved Packet Core (EPC), Customer Premises Equipment (CPE), and Radio Access. Applications or services frequently involve a sequence of functions that are performed on the packets constituting a specific instance of the application or service. In cloud applications, each application or service frequently requires multiple virtual functions to run sequentially on a multiplicity of virtual machines. The sequence of such functions in a service chain may be stamped on the header of each packet belonging to the service chain for subsequent processing. The selection of which virtualized resources are to be used for the processing of each function for an instance of a packet flow belonging to a service is the topic of interest in the embodiments described below.
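The stamping of a function sequence onto a packet header described above can be sketched as follows. This is a minimal illustration only; the `Packet` fields and the function names are assumptions for the sketch, not part of any embodiment.

```python
from dataclasses import dataclass, field

# Hypothetical packet representation; the fields are illustrative only.
@dataclass
class Packet:
    src: str
    dst: str
    payload: bytes
    service_chain: list = field(default_factory=list)  # stamped function sequence

def stamp_chain(packet, chain):
    """Record the ordered list of service functions on the packet header."""
    packet.service_chain = list(chain)
    return packet

pkt = stamp_chain(Packet("10.0.0.1", "10.0.0.2", b"video"), ["BBU", "SGW", "BGW", "CDN"])
print(pkt.service_chain)  # ['BBU', 'SGW', 'BGW', 'CDN']
```

Each downstream element would then read `service_chain` to decide which function to apply next.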
A brief summary of various exemplary embodiments is presented. Some simplifications and omissions may be made in the following summary, which is intended to highlight and introduce some aspects of the various exemplary embodiments, but not to limit the scope of the invention. Detailed descriptions of a preferred exemplary embodiment adequate to allow those of ordinary skill in the art to make and use the inventive concepts will follow in later sections.
Various exemplary embodiments relate to a method of balancing a load of inter-rack traffic in a data center including a plurality of racks. The method including receiving at a centralized load balancer, a path inquiry including a chain of functions for a service data flow; determining which virtual machine will perform each function of the chain of functions for the service data flow, wherein at least two functions of the chain of functions required in the service data flow are to be performed on the same rack; and assigning the service data flow to the determined virtual machines.
Various exemplary embodiments are described wherein the determining further includes: utilizing policy information to determine which virtual machines will process the service data flow.
Various exemplary embodiments are described wherein the determining further includes: utilizing current virtual machine capability information to determine which virtual machines will process the service data flow.
Various exemplary embodiments are described wherein identical load balancers are instantiated on two or more racks in the data center.
Various exemplary embodiments are described wherein the determining further includes: utilizing a round robin assignment algorithm to determine which virtual machines will process each instance of a virtual function of a service data flow.
Various exemplary embodiments are described further comprising: updating which virtual machine will perform at least one of the functions of the chain of functions.
Various exemplary embodiments are described including a non-transitory machine-readable storage medium encoded with instructions for execution by a centralized load balancer for balancing a load of inter-rack traffic in a data center including a plurality of racks, the medium including instructions for receiving at the centralized load balancer, a path inquiry including a chain of functions for a service data flow; instructions for determining which virtual machine will perform each function of the chain of functions for the service data flow, wherein at least two functions of the chain of functions required in the service data flow are to be performed on the same rack; and instructions for assigning the service data flow to the determined virtual machines.
Various exemplary embodiments are described, wherein the determining further includes: utilizing policy information to determine which virtual machines will process the service data flow.
Various exemplary embodiments are described wherein the determining further includes: utilizing current virtual machine capability information to determine which virtual machines will process the service data flow.
Various exemplary embodiments are described wherein identical load balancers are instantiated on two or more racks in the data center.
Various exemplary embodiments are described wherein the determining further includes: utilizing a round robin assignment algorithm to determine which virtual machines will process each instance of a virtual function of a service data flow.
Various exemplary embodiments are described, further comprising: updating which virtual machine will perform at least one of the functions of the chain of functions.
Various exemplary embodiments are described including a centralized load balancer for balancing a load of inter-rack traffic in a data center including a plurality of racks. The centralized load balancer including a memory configured to store a service data flow table; a processor configured to: receive at the centralized load balancer, a path inquiry including a chain of functions for a service data flow; determine which virtual machine will perform each function of the chain of functions for the service data flow, wherein at least two functions of the chain of functions required in the service data flow are to be performed on the same rack; and assign the service data flow to the determined virtual machines.
Various exemplary embodiments are described wherein the centralized load balancer is further configured to: utilize policy information to determine which virtual machines will process the service data flow.
Various exemplary embodiments are described wherein the centralized load balancer is further configured to: utilize current virtual machine capability information to determine which virtual machine to provide the service data flow to.
Various exemplary embodiments are described wherein an identical centralized load balancer is instantiated on two racks in the data center.
Various exemplary embodiments are described wherein the centralized load balancer is further configured to: utilize a round robin assignment algorithm to determine which virtual machines will process the service data flow.
Various exemplary embodiments are described wherein the centralized load balancer is further configured to: update which virtual machine will perform at least one of the functions of the chain of functions.
In order to better understand various exemplary embodiments, reference is made to the accompanying drawings, wherein:
To facilitate understanding, identical reference numerals have been used to designate elements having substantially the same or similar structure or substantially the same or similar function.
Large volumes of inter-rack communication frequently cause latencies and delays in processing in data centers. For example, the configuration shown in
The description and drawings merely illustrate the principles of the invention. It will thus be appreciated that those skilled in the art will be able to devise various arrangements that, although not explicitly described or shown herein, embody the principles of the invention and are included within its scope. Furthermore, all examples recited herein are principally intended expressly to be only for pedagogical purposes to aid the reader in understanding the principles of the invention and the concepts contributed by the inventor(s) to furthering the art, and are to be construed as being without limitation to such specifically recited examples and conditions. Additionally, the term, “or,” as used herein, refers to a non-exclusive or (i.e., and/or), unless otherwise indicated (e.g., “or else” or “or in the alternative”). Also, the various embodiments described herein are not necessarily mutually exclusive, as some embodiments can be combined with one or more other embodiments to form new embodiments.
The cloud environment also includes multiple data centers 130, 140, 150. It will be apparent that fewer or additional data centers may exist within the cloud environment. The data centers 130, 140, 150 each include collections of hardware that may be dynamically allocated to supporting various cloud applications. In various embodiments, the data centers 130, 140, 150 may be geographically distributed; for example, data centers 130, 140, 150 may be located in Washington, D.C.; Seattle, Wash.; and Tokyo, Japan, respectively.
Each data center 130, 140, 150 includes host devices for supporting virtualized devices, such as virtual machines. For example, data center 150 is shown to include two host devices 155, 160, which may both include various hardware resources. It will be apparent that the data center 150 may include fewer or additional host devices and that the host devices may be connected to the network 120 and each other via one or more networking devices such as routers and switches. In various embodiments, the host devices 155, 160 may be personal computers, servers, blades, or any other device capable of contributing hardware resources to a cloud environment. Similarly, host devices 155, 160 may be put on a rack or multiple racks with one or more similar devices.
The various host devices 155, 160 may support one or more cloud-based applications. For example, host device 160 is shown to support multiple virtual machines (VMs): VM 1 161, VM 2 162, and VM 3 163. As will be understood, a VM is an instance of an operating system and software running on hardware provided by a host device, imitating dedicated resources as in a single machine. Various alternative or additional network functions will be apparent such as, for example, load balancers and HTTPS. Such functionality may be provided as separate VMs or, as illustrated, in another type of virtualized device termed a “container.” As will be understood, a container is similar to a VM in that it provides virtualized functionality but, unlike a VM, does not include a separate OS instance and, instead, uses the OS or kernel of the underlying host system.
While the virtualized devices 161-163 are described as being co-resident on a single host device 160, it will be apparent that various additional configurations are possible. For example, one or more of the virtualized devices 161-163 may be hosted among one or more additional host devices 155 and/or racks within a data center, or among one or more additional data centers 130-150.
It will be apparent that while the exemplary cloud environment 100 is described in terms of a user device accessing a web application, the methods described herein may be applied to various alternative environments. For example, alternative environments may provide software as a service to a user tablet device or may provide backend processing to a non-end user server. Various alternative environments will be apparent.
According to various embodiments, the host device 160 implements a virtualized switch for directing messages received by the host device 160 to appropriate virtualized devices 161-163 or other devices or virtualized devices hosted on other host devices or in different data centers. As will be described in greater detail below, in some such embodiments, the virtualized switch is provided with instructions, such as code or configuration information, for forwarding traffic through a sequence of network function devices before being forwarded to the application VM. As such, the switch may forward traffic to locally hosted virtualized devices or to external devices or virtualized devices as well as a local or other types of load balancers.
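The virtualized switch's forwarding behavior can be sketched as a rule table keyed by flow and position in the chain. This is a hedged sketch only; the rule format, names, and flow identifiers are assumptions, not the embodiment's actual switch implementation.

```python
# Minimal virtualized-switch sketch: rules map (flow_id, position in chain)
# to the next destination, so traffic traverses the network functions in
# order before reaching the application VM.
class VirtualSwitch:
    def __init__(self):
        self.rules = {}  # (flow_id, position) -> next destination (VM or external)

    def install_chain(self, flow_id, hops):
        """Install one rule per hop of the service chain."""
        for pos, hop in enumerate(hops):
            self.rules[(flow_id, pos)] = hop

    def next_hop(self, flow_id, position):
        """Look up where a packet at this stage of the chain goes next."""
        return self.rules.get((flow_id, position))

vs = VirtualSwitch()
vs.install_chain("flow-1", ["fw-vm", "nat-vm", "app-vm"])
print(vs.next_hop("flow-1", 0))  # fw-vm
```

A real virtualized switch would match on header fields rather than an explicit flow id, but the lookup structure is analogous.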
The processor 220 may be any hardware device capable of executing instructions stored in memory 230 or software storage 260 or otherwise processing data. As such, the processor may include a microprocessor, field programmable gate array (FPGA), application-specific integrated circuit (ASIC), or other similar devices.
The memory 230 may include various memories such as, for example L1, L2, or L3 cache or system memory. As such, the memory 230 may include static random access memory (SRAM), dynamic RAM (DRAM), flash memory, read only memory (ROM), or other similar memory devices.
The user interface 240 may include one or more devices for enabling communication with a user such as an administrator. For example, the user interface 240 may include a display, a mouse, and a keyboard for receiving user commands. In some embodiments, the user interface 240 may include a command line interface or graphical user interface that may be presented to a remote terminal via the network interface 250.
The network interface 250 may include one or more devices for enabling communication with other hardware devices. For example, the network interface 250 may include a network interface card (NIC) configured to communicate according to the Ethernet protocol. Additionally, the network interface 250 may implement a TCP/IP stack for communication according to the TCP/IP protocols. Various alternative or additional hardware or configurations for the network interface 250 will be apparent.
The software storage 260 may include one or more machine-readable storage media such as read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, or similar storage media. In various embodiments, the software storage 260 may store instructions for execution by the processor 220 or data upon which the processor 220 may operate.
While the host device 200 is shown as including one of each described component, the various components may be duplicated in various embodiments. For example, the processor 220 may include multiple microprocessors that are configured to independently execute the methods described herein or are configured to perform steps or subroutines of the methods described herein such that the multiple processors cooperate to achieve the functionality described herein.
Service data flow content 335 may begin processing at or before entering a data center. Service data flow content 335 may have functions of service data chain 337 which are required for a specific packet or data type. For example, a mobile video packet may request processing of Baseband Unit (BBU), Serving Gateway (SGW), Border Gateway (BGW), and Content Delivery Network (CDN) functions. The functions of service data chain 337 may be associated with service data flow content 335. For example, these functions may be chained as A, B, C, and D in service data chain 337, respectively, as indicated in the exemplary data center. A service data flow may refer to multiple data packets associated with the same source and destination addresses as well as the same data type. Service data flows may further be associated with port identifiers specific to the data center and/or rack configurations. When a first packet arrives, an entity such as an SDN controller, for example, may place a notification indicating that A, B, C, and D are needed. Port identifications may already be stamped on a packet header.
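The association between a data type and its required function chain can be sketched as a simple lookup, with the mobile-video chain following the example above. The mapping structure and data-type keys are assumptions for illustration.

```python
# Hypothetical mapping from data type to its required service function chain;
# the mobile-video entry follows the example in the text.
REQUIRED_CHAINS = {
    "mobile_video": ["BBU", "SGW", "BGW", "CDN"],
}

def chain_for(data_type):
    """Return the ordered service functions required for this data type."""
    return REQUIRED_CHAINS.get(data_type, [])

print(chain_for("mobile_video"))  # ['BBU', 'SGW', 'BGW', 'CDN']
```

An SDN controller consulting such a table could then emit the notification that functions A through D are needed for the flow.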
In an embodiment shown in
Similar processing may occur for a different virtualized function on rack B 310. In one embodiment, rack A 305 may be capable of implementing a BBU, rack B 310 may be capable of implementing an SGW, and rack 315 may be capable of implementing a PGW.
An exemplary data center with localized service chaining which utilizes a centralized load balancer configuration 400 may reduce inter-rack load by integrating within each rack multiple constituents of a service chain. In some embodiments, a larger number of virtual function instances may be instantiated, including one or more on each rack: rack 1 415, rack 2 420, and rack 3 425. Each rack may process any part of a service chain and attempt to keep service data flows within each rack for the entire processing. In some embodiments, smaller-capacity and/or fewer instances of each function may be instantiated, which may consume fewer resources per function. For example, a mobile video packet requesting processing of BBU, SGW, PGW, and CDN virtual functions may be able to accomplish all four virtual functions' processing on one rack.
A service data flow content 405 may indicate that it requires functions A, B, C, and D from service data chain 407. Service data flow content 405 may include several packets of a service data flow associated with service data chain 407. Service data chain 407 may be received directly by the SDN controller, via signaling, when entering the exemplary data center. The centralized load balancer 410 may maintain a globalized view of the entire data center, including a view of virtual machines on all racks such as rack 1 415, rack 2 420, and rack 3 425.
The centralized load balancer 410 may create a service data flow for service data flow content 405, deciding which virtual machines and/or which rack(s) to utilize for the service data flow based upon the service data chain 407. The centralized load balancer may utilize several policies and/or performance metrics to determine which virtual machines to utilize. The centralized load balancer may then, through an SDN controller, set up the forwarding of packets of the service data flow content 405 through the selected set of virtual machines of the service chain.
The service data flow content 405 may follow bearer path 445 upon entering the data center. While following bearer path 445, the virtual machine processing the service data flow content may communicate with centralized load balancer 410 via signaling path 450 to determine which virtual machine to proceed to next. The service data flow may be updated dynamically. For example, once processing is done for function A on virtual machine A 430, the exiting packets may query centralized load balancer 410 to see which virtual machine to go to next. Centralized load balancer 410 may indicate upon this query to go to virtual machine B 435 to perform function B at a specified port. This selection may be used for all subsequent packets of the service chain until the system determines that a path recalculation is appropriate. Such recalculation of the path may be performed periodically or triggered by packet counts, network/VM status changes, or other system metrics.
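The "where next?" query with a packet-count recalculation trigger can be sketched as below. The class, the threshold, and the placeholder recalculation are assumptions for illustration; a real balancer would re-run its placement policy where the placeholder sits.

```python
# Sketch of a per-flow next-hop query service with a packet-count trigger
# for path recalculation. Names and the threshold value are assumptions.
class CentralizedLoadBalancer:
    def __init__(self, recalc_every=1000):
        self.paths = {}    # flow_id -> list of (function, vm) hops
        self.counts = {}   # flow_id -> queries seen since last recalculation
        self.recalc_every = recalc_every

    def assign_path(self, flow_id, hops):
        self.paths[flow_id] = hops
        self.counts[flow_id] = 0

    def query_next(self, flow_id, position):
        """Answer a signaling query: which VM performs the function at `position`?"""
        self.counts[flow_id] += 1
        if self.counts[flow_id] >= self.recalc_every:
            self.recalculate(flow_id)
        _function, vm = self.paths[flow_id][position]
        return vm

    def recalculate(self, flow_id):
        # Placeholder: a real balancer would re-run its placement policy here.
        self.counts[flow_id] = 0

lb = CentralizedLoadBalancer()
lb.assign_path("flow-1", [("A", "vm-a"), ("B", "vm-b")])
print(lb.query_next("flow-1", 1))  # vm-b
```

The same answer would then be reused for all subsequent packets of the flow until a recalculation fires.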
In some embodiments, a service data flow may receive all its processing steps in the rack it is assigned to. In order to decide which virtual machine performs each function, the infrastructure may query centralized load balancer 410 which may provide the virtual machine identity for the next function in the chain. A small signaling packet, different from packets of the service data flow, may be sent to the centralized load balancer 410 by the leading packet of a flow so that packets of the service data flow do not move across racks except for certain instances in which a target such as a least busy virtual machine is located on a separate rack. At each stage of a chain, the infrastructure may provide the ability to direct the packets of the service data flow to the next function in the chain based on the policy provided by the centralized load balancer 410. The policies may be reused for all packets of a service data flow.
Packets in a service data flow may take the same path utilizing one or more virtual machines on the same rack. In some embodiments, the centralized load balancer may look up and store the subsequent virtual machine in a data structure such as a flow table. A service data flow's sequence of virtual machines may be established and modified within any type of data structure such as a binary tree, a database, a table, or a hash table, for example, and stored in software storage 260.
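A flow table of the kind described above can be as simple as a hash table keyed by a flow identifier. The 4-tuple key and the function names below are illustrative assumptions, not a prescribed schema.

```python
# Sketch of a flow table mapping a flow identifier to its ordered VM
# sequence. The key fields are an assumption for illustration.
flow_table = {}

def store_path(src, dst, port, proto, vm_sequence):
    """Record the chain of VMs assigned to this flow."""
    flow_table[(src, dst, port, proto)] = list(vm_sequence)

def lookup_path(src, dst, port, proto):
    """Return the assigned VM sequence, or None if the flow is unknown."""
    return flow_table.get((src, dst, port, proto))

store_path("10.0.0.1", "10.0.0.2", 5060, "udp", ["vm-a1", "vm-b2", "vm-c1"])
print(lookup_path("10.0.0.1", "10.0.0.2", 5060, "udp"))
```

A binary tree or database-backed table would expose the same store/lookup interface with different performance trade-offs.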
Centralized load balancer 410 may proceed to step 515 where centralized load balancer 410 may determine policy and/or performance abilities for utilizing virtual machines for each function in a service data flow. Policy may be dependent on the overall system/data center size, user requirements, system requirements and/or active status of virtual machines and blades available. Load balancer policy and/or performance considerations may prioritize utilizing all or multiple functions on the same rack in order to avoid inter-rack communication and its associated latency and delays. The centralized load balancer 410 may also determine the most suitable virtual machine for the next function in the service chain based on the determined policies or performance abilities. Centralized load balancer 410 may proceed to step 520 where centralized load balancer 410 may create a service data flow for packets in a service chain identifying a virtual machine for each function in the chain. A service data flow may be established within any type of data structure such as a binary tree, a database, a table or a hash table and stored in software storage 260, for example.
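A placement policy that prioritizes keeping the whole chain on one rack, as described above, can be sketched as follows. The rack/VM inventory format and fallback behavior are assumptions for the sketch.

```python
# Sketch of a same-rack-preferring placement policy. `racks` maps rack id
# to {function: [available VM ids]}; the structure is an assumption.
def place_chain(chain, racks):
    """Prefer a single rack hosting every function of the chain, to avoid
    inter-rack communication; otherwise place each function wherever an
    instance exists."""
    for rack_id, functions in racks.items():
        if all(functions.get(fn) for fn in chain):
            return {fn: (rack_id, functions[fn][0]) for fn in chain}
    placement = {}
    for fn in chain:
        for rack_id, functions in racks.items():
            if functions.get(fn):
                placement[fn] = (rack_id, functions[fn][0])
                break
    return placement

racks = {
    "rack1": {"A": ["a1"], "B": ["b1"], "C": ["c1"], "D": ["d1"]},
    "rack2": {"A": ["a2"], "B": []},
}
print(place_chain(["A", "B", "C", "D"], racks))
```

Here the whole chain lands on rack1, so no packet of the flow crosses a rack boundary.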
In one embodiment, a service data flow table may be maintained which can be retrieved at any point, such as at initialization as well as when querying in method 600. The service data flow table may include hundreds of thousands of entries for service data flows and their associated service chains that are currently being processed in the data center, have been processed recently, may be processed in the future, and/or are otherwise in communication with the data center for any reason.
Centralized load balancer 410 may proceed to step 525 where centralized load balancer 410 may communicate, through an SDN controller, to an ingress element and/or virtual machine the first location where data should be processed. Signaling to an ingress element associated with a service data flow may occur at a top-of-rack switch such as top of rack switch 455, or at any other point in the data center infrastructure including any relevant virtual machine.
Centralized load balancer 410 may proceed to step 530 where centralized load balancer 410 may stop operation for that service data flow and/or packet.
In some embodiments, a small signaling packet or data type may be sent to the centralized load balancer when querying where to proceed. In some embodiments, assignment queries may occur at a virtual machine, once that virtual machine's function is done processing.
Centralized load balancer 410 may proceed to step 615 where centralized load balancer 410 may determine the most suitable virtual machine for the next function in a service chain. When determining which virtual machine the next function should be performed on in the service data flow, the centralized load balancer may take into account or base its decision on a policy. The policy may be dependent on the overall system/data center size, user requirements, system requirements, and/or the active status of the virtual machines and blades available. The centralized load balancer may account for inter-rack latency and/or link utilization when determining the next virtual machine. The centralized load balancer may similarly prioritize virtual machines performing the next or other functions in the service chain on the same rack in order to prevent inter-rack latency.
In some embodiments, the centralized load balancer may consider the load on the current virtual machines. In another embodiment, the centralized load balancer may consider the topology of the virtual machines, racks, and/or blades. In yet another embodiment, the centralized load balancer may implement an accounting algorithm such as round-robin packet scheduling via efficient hash functions, statistical multiplexing, first come first served, weighted round-robin, or a weighted scheduling system. Similarly, the centralized load balancer may simply look up the next already allocated virtual machine in the service data flow and determine the already allocated virtual machine to be the most suitable for performance of the next or subsequent function.
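Two of the assignment strategies named above, round-robin rotation and a hash-based pick, can be sketched as follows. The instance names are assumptions; the hash-based pick illustrates how a flow can be kept "sticky" to one instance.

```python
import itertools
import hashlib

# Round-robin: rotate through the available instances of a function.
def make_round_robin(instances):
    it = itertools.cycle(instances)
    return lambda: next(it)

# Hash-based pick: the same flow id always maps to the same instance.
def hash_pick(flow_id, instances):
    digest = hashlib.sha256(flow_id.encode()).digest()
    return instances[digest[0] % len(instances)]

pick = make_round_robin(["vm-b1", "vm-b2"])
print(pick(), pick(), pick())  # vm-b1 vm-b2 vm-b1
```

Weighted variants would repeat heavier-weighted instances in the rotation or bias the hash-bucket sizes accordingly.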
Centralized load balancer 410 may proceed to step 620 where centralized load balancer 410 may transmit a signaling packet from centralized load balancer 410 indicating which virtual machine to proceed to next. The signaling packet may be provided via signaling path 450. The signaling packet or information may include the port number or locator information of the next virtual machine.
In some embodiments, no querying of the centralized load balancer 410 may occur. In some embodiments, a virtual machine may have subsequent virtual machines already assigned or allocated for a certain period of time. The virtual machines may continue transmitting packets in service data flows for a time period allocated by the centralized load balancer. In some embodiments, virtual machines continue transmission autonomously until centralized load balancer 410 sends a signaling packet indicating otherwise.
In step 625 the packet may be provided to the appropriate virtual function. Similarly, the virtual machine which has finished performing its operations may transmit the packet along with all the packets in the service data flow to the port and virtual machine which was indicated by the centralized load balancer.
Centralized load balancer 410 may proceed to step 630 where centralized load balancer 410 may cease operation dealing with that service data flow.
It should be apparent from the foregoing description that various exemplary embodiments of the invention may be implemented in hardware and/or firmware. Furthermore, various exemplary embodiments may be implemented as instructions stored on a machine-readable storage medium, which may be read and executed by at least one processor to perform the operations described in detail herein. A machine-readable storage medium may include any mechanism for storing information in a form readable by a machine, such as a personal or laptop computer, a server, or other computing device. Thus, a machine-readable storage medium may include read-only memory (ROM), random-access memory (RAM), magnetic disk storage media, optical storage media, flash-memory devices, and similar storage media.
It should be appreciated by those skilled in the art that any block diagrams herein represent conceptual views of illustrative circuitry embodying the principles of the invention. Similarly, it will be appreciated that any flow charts, flow diagrams, state transition diagrams, pseudo code, and the like represent various processes which may be substantially represented in machine-readable media and so executed by a computer or processor, whether or not such computer or processor is explicitly shown.
Although the various exemplary embodiments have been described in detail with particular reference to certain exemplary aspects thereof, it should be understood that the invention is capable of other embodiments and its details are capable of modifications in various obvious respects. As is readily apparent to those skilled in the art, variations and modifications can be effected while remaining within the spirit and scope of the invention. Accordingly, the foregoing disclosure, description, and figures are for illustrative purposes only and do not in any way limit the invention, which is defined only by the claims.