The present disclosure generally relates to allocating data center resources in a multitenant service provider (SP) data network for implementation of a virtual data center (vDC) providing cloud computing services for a customer.
This section describes approaches that could be employed, but are not necessarily approaches that have been previously conceived or employed. Hence, unless explicitly specified otherwise, any approaches described in this section are not prior art to the claims in this application, and any approaches described in this section are not admitted to be prior art by inclusion in this section.
Placement of data center resources (e.g., compute, network, or storage) can be implemented in a variety of ways to enable a service provider to deploy distinct virtual data centers (vDC) for respective customers (i.e., tenants) as part of an Infrastructure as a Service (IaaS). The placement of data center resources in a multitenant environment, however, can become particularly difficult if a logically defined cloud computing service is arbitrarily implemented within the physical topology of the data center controlled by the service provider, especially if certain path constraints have been implemented within the physical topology by the service provider.
Reference is made to the attached drawings, wherein elements having the same reference numeral designations represent like elements throughout and wherein:
In one embodiment, a method comprises determining a stochastic distribution of received service requests for services in a data network having a prescribed physical topology; and allocating virtualized resources within the prescribed physical topology for a corresponding service request, based on the stochastic distribution.
In another embodiment, an apparatus comprises a device interface circuit configured for detecting received service requests for services in a data network having a prescribed physical topology; and a processor circuit. The processor circuit is configured for determining a stochastic distribution of the received service requests, the processor circuit further configured for allocating virtualized resources within the prescribed physical topology for a corresponding service request, based on the stochastic distribution.
In another embodiment, logic is encoded in one or more non-transitory tangible media for execution by a machine, and when executed by the machine operable for: determining a stochastic distribution of received service requests for services in a data network having a prescribed physical topology; and allocating virtualized resources within the prescribed physical topology for a corresponding service request, based on the stochastic distribution.
Particular embodiments can enable optimized placement of a service request within a physical topology of a service provider data network, based on stochastic analysis of received service requests that can provide a prediction of future demands on the physical topology by future service requests relative to existing service requests. Prior methods for allocating virtualized resources to implement a service request within a physical topology have utilized only the current (or past) state of resources in the physical topology, or the current/past state of resources in the virtualized data center, to determine how to implement a received service request. In other words, none of the prior methods of allocating virtualized resources considered the probability of future service requests, and/or the subsequent future resource utilization in a data center.
According to an example embodiment, stochastic analysis is performed on received service requests, in order to obtain a stochastic distribution of the service requests. The stochastic distribution enables a predictive analysis of future service requests relative to future resource utilization in the data center. Hence, the stochastic properties of service requests (e.g., virtual data center requests) and the resource utilization in a data center can be used to allocate virtualized resources in the physical topology (e.g., select one or more data center nodes for instantiating a new vDC request). Consequently, implementation issues such as defragmentation in multi-tenant data centers, rearranging provisioned (i.e., allocated) vDC requests due to congestion, traffic surge-based congestion, migration-based congestion, etc., can be mitigated or eliminated entirely based on applying the disclosed stochastic analysis providing a predictive analysis of future service requests relative to future resource utilization.
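As an illustration of the stochastic analysis described above, the following sketch (assuming Python; the request-type labels are hypothetical, not from the disclosure) builds an empirical distribution from a log of received service requests:

```python
from collections import Counter

def empirical_distribution(request_log):
    """Estimate P(vDC = X) from a log of received service requests.

    Each entry in request_log is a label describing the request type
    (e.g., its service tier or template); the returned dict maps each
    label to its observed relative frequency.
    """
    counts = Counter(request_log)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

# Example: a log of received vDC request types (illustrative labels)
log = ["gold", "silver", "gold", "platinum", "gold", "silver"]
dist = empirical_distribution(log)
# dist["gold"] == 0.5, so half of future requests are predicted to be "gold"
```

Such a distribution is the simplest form of the stochastic metadata; a deployed system could instead fit a parametric model over request arrival times and attributes.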
The apparatus 12 can be configured for implementing virtual data centers 16 for respective customers (i.e., tenants) in a multitenant environment, where virtual data centers 16 can be implemented within the service provider data network 14 using shared physical resources, while logically segregating the operations of the virtual data centers 16 to ensure security, etc. Each virtual data center 16 added to the service provider data network 14 consumes additional physical resources; moreover, logical requirements for a virtual data center 16 (whether imposed by the customer 22 or by service-provider policies) need to be reconciled with physical constraints within the service provider data network (e.g., bandwidth availability, topologically-specific constraints, hardware compatibility, etc.). Further, arbitrary allocation of physical resources in the service provider data network 14 for a virtual data center 16 may result in inefficient or unreliable utilization of resources.
According to an example embodiment, allocation of virtualized resources based on the stochastic distribution of received service requests enables the efficient and effective placement within the data center of the service request that logically defines virtual data center 16, in a manner that can minimize future implementation issues due to subsequent service requests (e.g., defragmentation, congestion, etc.).
As illustrated in
Although not illustrated in
Hence, the physical graph 20 can include an example inventory and attributes of the network devices in the physical topology 14, for use by the apparatus 12 in identifying feasible cloud elements when performing stochastic-based allocation of virtualized network devices relative to the network devices, according to logical constraints specified by a service request and/or service provider-based constraints and policies, described below.
The apparatus 12 can include a network interface circuit 44, a processor circuit 46, and a non-transitory memory circuit 48. The network interface circuit 44 can be configured for receiving, from any requestor 22 (e.g., a customer 22), a request for a service such as a service request 42. The network interface circuit 44 also can be configured for detecting received service requests 42 based on accessing a request cache storing incoming service requests received over time.
The network interface circuit 44 also can be configured for sending requests initiated by the processor circuit 46 to targeted network devices of the service provider data network 14, for example XMPP requests for configuration and/or policy information from the management agents executed in any one of the network devices of the service provider data network; the network interface circuit 44 also can be configured for receiving the configuration and/or policy information from the targeted network devices. The network interface circuit 44 also can be configured for communicating with the customers 22 via the wide-area network 18, for example sending an acknowledgment that the service request 42 has been deployed and activated for the customer 22. Other protocols can be utilized by the processor circuit 46 and the network interface circuit 44, for example: IGP bindings according to the OSPF, IS-IS, and/or RIP protocols; logical topology parameters, for example BGP bindings according to the BGP protocol; MPLS label information according to the Label Distribution Protocol (LDP); VPLS information according to the VPLS protocol; Asynchronous Transfer Mode (ATM) switching; and/or AToM information according to the AToM protocol (the AToM system is a commercially-available product from Cisco Systems, San Jose, Calif., that can transport link layer packets over an IP/MPLS backbone).
The processor circuit 46 can be configured for executing a Cisco Nexus platform for placement of the service request 42 into the physical topology 14, described in further detail below. The processor circuit 46 also can be configured for creating, storing, and retrieving from the memory circuit 48 relevant data structures, for example the physical graph 20, a collection of one or more service requests 42 received over time, and metadata 43 describing the physical graph 20 and/or the service requests 42, etc. As described in further detail below, the metadata 43 can include a stochastic distribution of received service requests 42, determined by the processor circuit 46. The memory circuit 48 can be configured for storing any parameters used by the processor circuit 46, described in further detail below.
Any of the disclosed circuits (including the network interface circuit 44, the processor circuit 46, the memory circuit 48, and their associated components) can be implemented in multiple forms. Example implementations of the disclosed circuits include hardware logic that is implemented in a logic array such as a programmable logic array (PLA), a field programmable gate array (FPGA), or by mask programming of integrated circuits such as an application-specific integrated circuit (ASIC). Any of these circuits also can be implemented using a software-based executable resource that is executed by a corresponding internal processor circuit such as a microprocessor circuit (not shown) and implemented using one or more integrated circuits, where execution of executable code stored in an internal memory circuit (e.g., within the memory circuit 48) causes the integrated circuit(s) implementing the processor circuit 46 to store application state variables in processor memory, creating an executable application resource (e.g., an application instance) that performs the operations of the circuit as described herein. Hence, use of the term “circuit” in this specification refers to either a hardware-based circuit implemented using one or more integrated circuits and that includes logic for performing the described operations, or a software-based circuit that includes a processor circuit (implemented using one or more integrated circuits), the processor circuit including a reserved portion of processor memory for storage of application state data and application variables that are modified by execution of the executable code by a processor circuit. The memory circuit 48 can be implemented, for example, using a non-volatile memory such as a programmable read only memory (PROM) or an EPROM, and/or a volatile memory such as a DRAM, etc.
Further, any reference to “outputting a message” or “outputting a packet” (or the like) can be implemented based on creating the message/packet in the form of a data structure and storing that data structure in a tangible memory medium in the disclosed apparatus (e.g., in a transmit buffer). Any reference to “outputting a message” or “outputting a packet” (or the like) also can include electrically transmitting (e.g., via wired electric current or wireless electric field, as appropriate) the message/packet stored in the tangible memory medium to another network node via a communications medium (e.g., a wired or wireless link, as appropriate) (optical transmission also can be used, as appropriate). Similarly, any reference to “receiving a message” or “receiving a packet” (or the like) can be implemented based on the disclosed apparatus detecting the electrical (or optical) transmission of the message/packet on the communications medium, and storing the detected transmission as a data structure in a tangible memory medium in the disclosed apparatus (e.g., in a receive buffer). Also note that the memory circuit 48 can be implemented dynamically by the processor circuit 46, for example based on memory address assignment and partitioning executed by the processor circuit 46.
The service request 42 can specify request nodes 54 (e.g., 54a, 54b, and 54c) and one or more request edges 56 (e.g., 56a, 56b, 56c, and 56d). Each request node 54 can identify (or define) at least one requested cloud computing service operation to be performed as part of the definition of the virtual data center 16 to be deployed for the customer. For example, the request node 54a can specify the cloud computing service operation of “web” for a virtualized web server; the request node 54b can specify the cloud computing service of “app” for virtualized back end application services associated with supporting the virtualized web server; the request node 54c can specify the cloud computing service of “db” for virtualized database application operations responsive to database requests from the virtualized back end services. Each request node 54 can be associated with one or more physical devices within the physical topology 14, where typically multiple physical devices may be used to implement the request node 54.
Each request edge 56 can specify requested path requirements connecting two or more of the request nodes 54. For example, a first request edge (“vDC-NW: front-end”) 56a can specify logical requirements for front-end applications for the virtual data center 16, including firewall policies and load-balancing policies, plus a guaranteed bandwidth requirement of two gigabits per second (2 Gbps); the request edge 56b can specify requested path requirements connecting the front end to the request node 54a associated with providing virtualized web server services, including a guaranteed bandwidth requirement of 2 Gbps; the request edge 56c can specify a requested path providing inter-tier communications between the virtualized web server 54a and the virtualized back end application services 54b, with a guaranteed bandwidth of 1 Gbps; and the request edge 56d can specify a requested path providing inter-tier communications between the virtualized back end application services 54b and the virtualized database application operations 54c, with a guaranteed bandwidth of 1 Gbps. Hence, the service request 42 can provide a logical definition of the virtual data center 16 to be deployed for the customer 22.
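The request graph of request nodes 54 and request edges 56 described above can be modeled with a simple data structure; a minimal sketch (assuming Python; the class and field names are illustrative, not from the disclosure):

```python
from dataclasses import dataclass, field

@dataclass
class RequestNode:
    name: str  # requested cloud computing service operation, e.g. "web", "app", "db"

@dataclass
class RequestEdge:
    endpoints: tuple       # names of the request nodes the edge connects
    bandwidth_gbps: float  # guaranteed bandwidth requirement for the path

@dataclass
class ServiceRequest:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

# The three-tier example from the text: web <-> app and app <-> db,
# each with a guaranteed bandwidth of 1 Gbps
req = ServiceRequest(
    nodes=[RequestNode("web"), RequestNode("app"), RequestNode("db")],
    edges=[RequestEdge(("web", "app"), 1.0), RequestEdge(("app", "db"), 1.0)],
)
```

A placement engine would then map each RequestNode onto one or more physical devices and each RequestEdge onto physical paths satisfying the bandwidth guarantee.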
Depending on implementation, the request edges 56 of the service request 42 may specify the bandwidth constraints in terms of one-way guaranteed bandwidth, requiring the service provider to provision sufficient bandwidth between the physical network nodes implementing the request nodes 54. Further, the physical topology 14 may include many different hardware configuration types, for example different processor types or switch types manufactured by different vendors, etc. Further, the bandwidth constraints in the physical topology 14 must be evaluated relative to the available bandwidth on each link, and the relative impact that placement of the service request 42 across a given link will have with respect to bandwidth consumption or fragmentation. Further, service provider policies may limit the use of different network nodes within the physical topology: an example overlay constraint may limit network traffic for a given virtual data center 16 to within a prescribed aggregation realm, such that any virtual data center 16 deployed within the aggregation realm serviced by the aggregation node “AGG1” 28 cannot interact with any resource implemented within the aggregation realm serviced by the aggregation node “AGG2” 28; an example bandwidth constraint may require that any placement does not consume more than ten percent of the maximum link bandwidth, and/or twenty-five percent of the available link bandwidth.
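The example bandwidth constraint above (no more than ten percent of the maximum link bandwidth, and no more than twenty-five percent of the available link bandwidth) could be checked per link as follows; a sketch with illustrative parameter names:

```python
def placement_allowed(requested_gbps, max_link_gbps, available_gbps,
                      max_fraction=0.10, avail_fraction=0.25):
    """Return True if placing the request on this link satisfies the
    example service-provider bandwidth constraint: the request may not
    consume more than max_fraction of the maximum link bandwidth, nor
    more than avail_fraction of the currently available bandwidth."""
    return (requested_gbps <= max_fraction * max_link_gbps and
            requested_gbps <= avail_fraction * available_gbps)

# A 2 Gbps request on a 40 Gbps link with 10 Gbps still available:
# 2 <= 4.0 (10% of 40) and 2 <= 2.5 (25% of 10), so placement is allowed.
```

The two thresholds play different roles: the maximum-bandwidth cap bounds any single tenant's share of the link, while the available-bandwidth cap leaves headroom for future requests on heavily loaded links.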
In addition to the foregoing limitations imposed by the customer service request and/or the service provider policies, arbitrary placement of the customer service request 42 within the physical topology 14 may result in traversal of network traffic across an excessive number of nodes, requiring an additional consumption of bandwidth along each hop.
According to an example embodiment, the processor circuit 46 can determine a stochastic distribution of the received service requests 42. The processor circuit 46 also can allocate in operation 60 of
Hence, the processor circuit 46 can use predictive analysis to allocate virtualized resources 50. As illustrated in
Referring to
The processor circuit 46 in operation 72 can identify each attribute of each service request 42, as described above with respect to
The processor circuit 46 in operation 76 of
Example allocations in operation 78 based on the stochastic distribution 43 can include the processor circuit 46 allocating virtualized resources 50 based on a probability function “P(vDC=X)” (80a of
Hence, the allocations in operation 78 based on the stochastic distribution 43 of service requests 42 can enable efficient deployment of virtual data centers 16, without the necessity of modifying or moving provisioned virtual data centers 16 due to an increase in service requests 42 or consumed resources. Further, system overhead can be reduced by mitigating fragmentation of resources that need to be reclaimed after termination of a virtual data center 16. The allocation based on different service levels also can enable an improvement in revenue by favoring higher service level requests (e.g., platinum over gold) based on the predictions from the stochastic distribution.
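The probability-weighted placement described above can be sketched as follows; the request types, per-type demands, and best-fit selection policy are illustrative assumptions, not details from the disclosure (assuming Python):

```python
def expected_future_demand(dist, demand_by_type, horizon):
    """Predicted aggregate demand of the next `horizon` requests:
    horizon * sum over request types X of P(vDC=X) * demand(X)."""
    return horizon * sum(p * demand_by_type[x] for x, p in dist.items())

def choose_node(candidates, dist, demand_by_type, horizon):
    """Select a data center node for the current request: keep only
    nodes whose free capacity covers the predicted future demand, then
    pack into the tightest fit to reduce fragmentation (an illustrative
    policy, not one mandated by the disclosure)."""
    need = expected_future_demand(dist, demand_by_type, horizon)
    feasible = [n for n in candidates if n["free_gbps"] >= need]
    return min(feasible, key=lambda n: n["free_gbps"]) if feasible else None

# Assumed inputs: a stochastic distribution over request types and the
# bandwidth demand associated with each type.
dist = {"gold": 0.5, "silver": 0.5}
demand = {"gold": 2.0, "silver": 1.0}          # Gbps per request type
nodes = [{"name": "AGG1", "free_gbps": 20.0},
         {"name": "AGG2", "free_gbps": 12.0}]
best = choose_node(nodes, dist, demand, horizon=5)
# Predicted future demand is 5 * (0.5*2.0 + 0.5*1.0) = 7.5 Gbps; both
# nodes can cover it, and "AGG2" is the tighter fit.
```

The key difference from purely state-based placement is the `need` term: the node is chosen against predicted future utilization, not only the bandwidth currently free.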
The processor circuit 46 also can initiate in operation 84 a service order (e.g., to a Provisioning Manager) to change the physical topology 14 based on the stochastic distribution 43, for example based on detecting that the current physical topology (as represented by the physical graph 20) needs to be modified to better suit future vDC requests 42.
According to the example embodiments, cloud resource placement in a physical topology of a service provider data network can be optimized based on a determined stochastic distribution of the received service requests, enabling the mitigation or elimination of fragmentation, congestion, re-provisioning due to congestion, etc.
While the example embodiments in the present disclosure have been described in connection with what is presently considered to be the best mode for carrying out the subject matter specified in the appended claims, it is to be understood that the example embodiments are only illustrative, and are not to restrict the subject matter specified in the appended claims.