The present disclosure relates to advertising capabilities and resources and routing service requests in a cloud computing system.
“Cloud computing” can be defined as Internet-based computing in which shared resources, software and information are provided to client or user computers or other devices on-demand from a pool of resources that are communicatively available via the Internet. Cloud computing is envisioned as a way to democratize access to resources and services, letting users efficiently purchase as many resources as they need and/or can afford.
In a cloud computing environment, numerous cloud service requests are serviced in relatively short periods of time. The cloud services may consist of any combination of compute services, network services, and storage services. Examples of network services include L2 (VLAN) or L3 (VRF) connectivity between various physical and logical elements in the data center, as well as L4-L7 services, including firewalls, load balancers, QoS, ACLs, and accounting. In such an environment, it is highly beneficial to automate the placement and instantiation of cloud services within and between data centers, so that cloud service requests can be accommodated dynamically with minimal (preferably no) human intervention.
Overview
Systems and methods are provided for receiving, at a provider edge routing device, capabilities data representative of capabilities of computing devices disposed in a data center, the capabilities data having been published by an associated local data center edge device, and advertising, by the provider edge routing device, the capabilities data to other provider edge routing devices in communication with one another in a network of provider edge routing devices. The provider edge routing device also receives respective capabilities data from each of the other provider edge routing devices, wherein each of the other provider edge routing devices is associated with a respective local data center via a corresponding data center edge device, and stores all of the received capabilities data in a directory of capabilities. Thereafter, a request for computing services is received at the provider edge network, and the methodology provides for selecting, based on the directory of capabilities, one of the data centers to fulfill the request for computing services to obtain a selected data center, and routing the request for computing services to the selected data center.
Each Data Center 131, 132 (and using Data Center 131 as an example) may comprise DC Edge routers 133, 134 (as mentioned), a firewall 138, and a load balancer 139. These elements operate together to enable “pods” 151(1)-151(n), 152(1), etc., which respectively include multiple cloud resource devices 190(1)-190(3), 190(4)-190(7), 190(8)-190(11), to communicate effectively through the network topology 100 and provide computing and storage services to, e.g., clients 110, which may be other Data Centers or even standalone computers. In a publish-subscribe system, which is one way to implement such a cloud computing environment, clients 110 are subscribers to requested resources and the cloud resource devices 190(1)-190(3), 190(4)-190(7), 190(8)-190(11) (which publish their services, capabilities, etc.) are the ultimate providers of those resources, although the clients themselves may have no knowledge of which specific cloud resource devices actually provide the desired service (e.g., compute, storage, etc.).
Still referring to
Further still, servers within a pod may be grouped together in what are called “clusters” or “cluster pools.” For example, if there are 100 physical servers in a pod, then they can be divided into four clusters each comprising 25 physical servers. Physical resources are shared within a cluster for load distribution, failure handling, etc. The notion of clusters may be viewed as a fourth hierarchical level (in addition to the pod level, data center level and provider edge level). The cluster level is subordinate to the pod level.
It is envisioned that there are some deployments that do not use all three (or even four) hierarchical levels (cluster, pod, data center and provider edge). For example, it is envisioned that the techniques described herein may be employed where there are only two levels, e.g., data center level and provider edge level, where a data center is effectively viewed as one pod. In another example, the techniques described herein are employed for four levels: provider edge, data center, pod and cluster.
Cloud resource devices 190 themselves may be web or application servers, storage devices such as disk drives, or any other computing resource that might be of use or interest to an end user, such as client 110.
Processors 210/310 may be programmable processors (microprocessors or microcontrollers) or fixed-logic processors. In the case of a programmable processor, any associated memory (e.g., 220, 320) may be of any type of tangible processor readable memory (e.g., random access, read-only, etc.) that is encoded with or stores instructions that can implement the Attribute Summarization Logic 230, 330. Alternatively, processors 210, 310 may be comprised of a fixed-logic processing device, such as an application specific integrated circuit (ASIC) or digital signal processor that is configured with firmware comprised of instructions or logic that cause the processor to perform the functions described herein. Thus, Attribute Summarization Logic 230, 330 may be encoded in one or more tangible media for execution, such as with fixed logic or programmable logic (e.g., software/computer instructions executed by a processor) and any processor may be a programmable processor, programmable digital logic (e.g., field programmable gate array) or an ASIC that comprises fixed digital logic, or a combination thereof. In general, any process logic may be embodied in a processor or computer readable medium that is encoded with instructions for execution by a processor that, when executed by the processor, are operable to cause the processor to perform the functions described herein.
As noted, there can be many different types of cloud resource devices 190 in a given network including, but not limited to, compute devices, network devices, storage devices, service devices, etc. Each of these devices can have a different set of capabilities or attributes and these capabilities or attributes may change over time. For example, a larger capacity disk drive might be installed in a given storage device, or an upgraded set of parallel processors may be installed in a given compute device. Furthermore, how a cloud, particularly one that operates consistent with a publish-subscribe model, might view or present/advertise these capabilities or attributes in aggregate to potential subscribers may vary from one capability or attribute type to another.
More specifically, in one possible implementation of a cloud computing infrastructure like that shown in
In one embodiment, the capabilities or attributes published by devices (e.g., cloud resource devices 190) in a domain at the lowest layer of the network hierarchy (e.g., within pod 151) are summarized/aggregated into a common set of capabilities associated with the entire domain. Thus, referring again to
In an embodiment, each device can advertise (publish) its capabilities or attributes on a common control plane. Such a control plane could be implemented using a presence protocol such as XMPP (Extensible Messaging and Presence Protocol), among other possible protocols or mechanisms that enable devices to communicate with each other.
Significantly, and in an effort to maintain a certain level of automation in the attribute summarization process, not only is a given attribute published or advertised, but an extensible aggregation function is provided along with that given attribute that enables the device that is publishing the attributes to specify the manner in which the attribute should be treated/aggregated or summarized at a next higher level in the network hierarchy. Extensibility in this context is desirable as different attributes may need to be summarized differently. For example, depending on the type of attribute, the attribute may be summarized with other like attributes of other devices via primitives such as concatenation, addition, selection of a lesser of values, etc. In one implementation, the Attribute Summarization Logic 230/330 may provide and/or support a comprehensive list of primitive aggregation functions (e.g., SUM, MULTIPLY, DIFFERENCE, AVERAGE, STANDARD DEVIATION, CONCATENATION, LENGTH, LESSER_OF, GREATER_OF, MAX, MIN, UNION, INTERSECTION, etc.), and the devices can then specify which one of (or combination of) the primitive functions to use when the attributes of a given device are to be summarized. The selection of a primitive aggregation function could be performed automatically, or may be performed manually by an administrator.
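By way of non-limiting illustration, the following Python sketch shows one way such a registry of primitive aggregation functions might be organized; the names, the subset of primitives shown, and the data layout are illustrative assumptions and are not prescribed by this disclosure.

# Hypothetical registry of primitive aggregation functions; illustrative only.
from functools import reduce

PRIMITIVES = {
    "SUM": lambda values: sum(values),
    "AVERAGE": lambda values: sum(values) / len(values),
    "MAX": lambda values: max(values),
    "MIN": lambda values: min(values),
    "CONCATENATION": lambda values: ",".join(str(v) for v in values),
    "UNION": lambda values: reduce(lambda a, b: a | b, (set(v) for v in values)),
    "INTERSECTION": lambda values: reduce(lambda a, b: a & b, (set(v) for v in values)),
}

def summarize(attribute_name, values, function_name):
    # Apply the primitive that the publishing device named for this attribute.
    return attribute_name, PRIMITIVES[function_name](values)

# A device could specify SUM for storage capacity so values add up the hierarchy,
# and INTERSECTION for a set-valued attribute such as supported protocols.
print(summarize("storage_gb", [500, 750, 250], "SUM"))
print(summarize("supported_protocols", [{"BGP", "RIP"}, {"BGP"}], "INTERSECTION"))

In such a sketch, the device publishing an attribute simply names the primitive (or combination of primitives) that the next higher node should apply, which is the extensibility property described above.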
Still with reference to
In light of the foregoing, those skilled in the art will appreciate that the Attribute Summarization Logic 230 enables each device to independently determine the attributes that it would like to advertise or publish. The Attribute Summarization Logic 230 also enables the device to provide metadata about those attributes. This approach allows for attributes, which are not a priori known or understood by a next higher node carrying out the summarization function, to still be intelligently summarized/aggregated and then published at a still next layer up in the hierarchy. In one possible implementation, cloud resource devices 190 could provide customers with the ability to configure their own attributes that are not understood by the devices themselves, but are intelligently summarized/aggregated and published up the hierarchy, then referenced in customer policies for hierarchical rendering and provisioning of services.
The following is another example of how the Attribute Summarization Logic 230 may operate. Consider an example of advertising “compute” power through the network hierarchy. Each cloud resource device can advertise the number of cores it has available along with the operating frequency of each core. For example, Device A advertises 4C@1.2 GHz, Device B advertises 4C@1.2 GHz, and Device C advertises 4C@2.0 GHz. Each of these cloud resource devices will publish this information to a first logical hop, e.g., aggregation node 160. At that node, Attribute Summarization Logic 330 might aggregate or summarize the received information into one advertisement of “8C@1.2 GHz, 4C@2.0 GHz.” In contrast, a traditional publish-subscribe system might have simply sent or forwarded the three originally received individual advertisements. Note that, in this case, the summarization is not a simple summing operation, but is instead a function. Such a function can make use of one or more operations, including but not limited to SUM, MULTIPLY, DIFFERENCE, AVERAGE, STANDARD DEVIATION, CONCATENATION, LENGTH, LESSER_OF, GREATER_OF, MAX, MIN, UNION, INTERSECTION, among others.
In this particular example, the function underlying summarization is: compare the frequency, and if they are equal then add the number of cores.
More specifically, consider that the elements are arranged in a <key, value> array, where key is the operating frequency and the value is the number of cores. That is, and referring again to
That is, for each core having a given operating frequency, add that core to a running total. In this way, a next higher node in the network hierarchy can efficiently summarize attributes, or even combinations of attributes of nodes from a next lower level in the network hierarchy.
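A minimal sketch of this <key, value> summarization, in which each advertisement is assumed to be a mapping from operating frequency to core count (the representation is an illustrative assumption rather than a required format), is as follows.

from collections import Counter

def summarize_compute(advertisements):
    # Each advertisement maps operating frequency (GHz) to a core count;
    # cores at equal frequencies are added, per the function described above.
    totals = Counter()
    for advertisement in advertisements:
        for freq_ghz, cores in advertisement.items():
            totals[freq_ghz] += cores
    return dict(totals)

# Devices A, B and C from the example: reproduces "8C@1.2 GHz, 4C@2.0 GHz".
print(summarize_compute([{1.2: 4}, {1.2: 4}, {2.0: 4}]))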
Those skilled in the art will appreciate that more complex operations might be implemented. For instance, it might be desirable to consider multiple dimensions including, e.g., memory, storage, processor type (PPC, X86, ARM, 32 bit, 64 bit etc.), connectivity, bandwidth, etc. All such attributes can be summarized consistent with instructions or functions delivered in the metadata (which might even include an explicit equation) that is provided along with the attributes in a message like that shown in
Another example of a summarization function is “intersection,” as noted above. For example, it may be desirable to determine the intersection of routing protocols supported in a routing domain across different routers. Consider the following:
Router 1 supports: BGP (Border Gateway Protocol), OSPF (Open Shortest Path First), RIP (Routing Information Protocol), ISIS (Intermediate System to Intermediate System); summarization operator (function)=intersection.
Router 2 supports: BGP, RIP, ISIS; summarization operator (function)=intersection.
Summarized information according to intersection would be: BGP, RIP, ISIS.
Intersection may be a useful function in that all routers in a given routing domain should communicate via the same protocol.
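Under the same illustrative assumptions, the intersection summarization for this routing-protocol example might be sketched as follows.

def summarize_protocols(router_advertisements):
    # Intersect the protocol sets advertised by each router in the domain.
    protocol_sets = [set(protocols) for protocols in router_advertisements]
    return set.intersection(*protocol_sets)

# Router 1 and Router 2 from the example above; result is {BGP, RIP, ISIS}.
print(summarize_protocols([
    {"BGP", "OSPF", "RIP", "ISIS"},   # Router 1
    {"BGP", "RIP", "ISIS"},           # Router 2
]))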
It is apparent that any attempt to aggregate multiple resources from within a given domain into one set of resource values to be advertised to the next higher domain can result in loss of information. There is an inherent tradeoff whenever summarization is introduced: scale is improved, but accuracy is decreased due to loss of detailed information. “Resource groups” are one tool that can help improve the accuracy in representing resources to higher layers in the hierarchy, at the expense of increased amounts of information.
For example, it is not possible to accurately aggregate into only one processing capacity value and one value for available bandwidth the capabilities of a network element that can support either 2 GHz of processing capacity together with 2 Gbps of available bandwidth, or 10 GHz of processing capacity together with only 500 Mbps of available bandwidth.
A conservative approach would advertise 2 GHz processing capacity with 500 Mbps available bandwidth. Requests to a Data Center control point for more than 2 GHz processing capacity that only require 500 Mbps available bandwidth would not be directed, however, to a pod having the above published summarization.
On the other hand, an aggressive approach might result in advertising 10 GHz processing capacity with 2 Gbps available bandwidth. Requests for more than 2 GHz processing capacity along with more than 500 Mbps available bandwidth may still be directed towards the pod, even though such a combination cannot be supported. The pod control point would have to reject this request, leaving the Data Center control point to select a different pod.
In order to advertise such combinations more accurately, the notion of a resource group can be introduced. The combination of capabilities above can be accurately represented by advertising two resource groups for the same network element. One resource group can reflect the combination of 2 GHz processing capacity and 2 Gbps available bandwidth. The other resource group can reflect the combination of 10 GHz processing capacity and 500 Mbps available bandwidth.
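As a non-limiting sketch (the field names below are illustrative assumptions), the two resource groups for such an element, and a placement check against them, might be represented as follows.

from dataclasses import dataclass

@dataclass
class ResourceGroup:
    # One container of disparate resources advertised together for accounting.
    element_id: str
    processing_ghz: float
    bandwidth_mbps: int

# Two resource groups advertised for the same network element, rather than a
# single over-stated or under-stated pair of values.
groups = [
    ResourceGroup("element-1", processing_ghz=2.0, bandwidth_mbps=2000),
    ResourceGroup("element-1", processing_ghz=10.0, bandwidth_mbps=500),
]

def can_place(request_ghz, request_mbps, groups):
    # A request is routable here only if at least one group satisfies both values.
    return any(g.processing_ghz >= request_ghz and g.bandwidth_mbps >= request_mbps
               for g in groups)

print(can_place(5.0, 400, groups))    # True: fits the 10 GHz / 500 Mbps group
print(can_place(5.0, 1000, groups))   # False: no single group supports the combination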
Thus, a resource group can be considered a collection of disparate resources collected together into one container for the purposes of accounting and consumption. A particular resource may be merged into one or more resource groups and the composition (which resource types/attributes are aggregated) of a given resource group may change at run-time. New resource groups can be created while the system is in operation.
The publishers of the information may not be aware of resource groups at all or of which resource group they will be a part, as any association into resource groups is performed as the resource advertisements are received and analyzed at next higher levels within the network hierarchy or, more generally, at different nodes not necessarily arranged in a hierarchy.
As an example, suppose the following Resource Group Templates are defined by an administrator:
Now consider cloud resource devices with the following published advertisements:
When the advertisements arrive at a next higher level node the node can export three resource groups, namely:
Then, at step 620, a function that defines how the attribute is to be summarized together with a same attribute of a second network device is selected. The function could, for example, be any one of count, sum, multiply, divide, difference, average, standard deviation or concatenate, and may even include a more elaborate equation or program. At step 630, a message is generated that comprises a tuple (or set of information) comprising an identification of the attribute and the function, and then at step 640, the message is sent to a next higher node in a network hierarchy of which the network device is a part. In an embodiment, the message is sent using a presence protocol such as XMPP. Although not required, the first and the second network device may be at a same level within the network hierarchy such that a next higher node in the network hierarchy can receive a plurality of such messages and summarize the attributes of lower level entities. The messages may also be publish or advertisement messages within a publish-subscribe system.
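As one illustrative sketch of steps 620-640, with message fields and a publish transport hook that are assumptions rather than requirements of the disclosure, a network device might construct and send such a message as follows.

import json

def build_publish_message(device_id, attribute_name, attribute_value, function_name):
    # Step 630: pair the attribute with the function telling the next higher
    # node how this attribute should be summarized.
    return json.dumps({
        "device": device_id,
        "attribute": attribute_name,
        "value": attribute_value,
        "summarization_function": function_name,   # e.g. "SUM", "INTERSECTION"
    })

def send_to_next_higher_node(message, publish):
    # Step 640: hand the message to whatever publish transport is in use
    # (for instance an XMPP client's publish call); injected here for the sketch.
    publish(message)

# Example: advertise available storage and request that it be summed upward.
message = build_publish_message("server-190-1", "storage_gb", 500, "SUM")
send_to_next_higher_node(message, publish=print)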
As shown, at step 710, at, e.g., an aggregation node of a data center comprising a plurality of network devices, a first publish message from a first network device is received, and the first publish message from the first network device includes a first tuple (or set of information) having a form (attribute1, metadata1), wherein a given attribute describes a capability of the first network device. At step 720, at, e.g., the same aggregation node of the data center, a second publish message from a second network device is received, and the second publish message from the second network device includes a second tuple (or set of information) having the form (attribute2, metadata2). At step 730, a third tuple (or set of information) is generated by combining information in the first tuple and the second tuple consistent with functions defined by the metadata, and at step 740, a third publish message is sent to a next higher aggregation node in a hierarchical structure of which the aggregation node is a member, the third publish message comprising the third tuple.
As explained, the summarizing node can also generate resource groups that combine and summarize attributes from multiple network devices in different ways. Thus, the first publish message and the second publish message may each comprise a plurality of attributes and respective metadata, and the overall methodology may further generate a plurality of groupings (resource groups) that summarize and combine the attributes in different ways to satisfy, perhaps, predetermined templates.
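Continuing the same illustrative sketch, a receiving aggregation node might carry out steps 710-740 by grouping the received tuples by attribute and applying the function each publisher specified in its metadata; all names and message formats below remain assumptions.

import json
from collections import defaultdict

FUNCTIONS = {
    "SUM": sum,
    "MAX": max,
    "MIN": min,
    "INTERSECTION": lambda values: set.intersection(*(set(v) for v in values)),
}

def summarize_publish_messages(messages):
    # Steps 730-740: group received (attribute, value, function) tuples by
    # attribute and apply the function the publishers asked for; the result is
    # the third tuple, to be carried in a publish message up the hierarchy.
    grouped = defaultdict(list)
    for raw in messages:
        message = json.loads(raw)
        grouped[(message["attribute"], message["summarization_function"])].append(message["value"])
    return {attribute: FUNCTIONS[function](values)
            for (attribute, function), values in grouped.items()}

messages = [
    json.dumps({"attribute": "storage_gb", "value": 500, "summarization_function": "SUM"}),
    json.dumps({"attribute": "storage_gb", "value": 750, "summarization_function": "SUM"}),
]
print(summarize_publish_messages(messages))   # {'storage_gb': 1250}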
In order to make intelligent placement decisions in a cloud computing system, it is highly beneficial to expose the capabilities and resources of all cloud elements (compute, network, and storage) to the resource managers that make the cloud services placement decisions. The goal is to minimize instantiation failures and retries due to insufficient resources or capabilities at individual cloud elements, while accommodating all cloud service requests for which sufficient available resources and capabilities exist.
Advertisement of capabilities and resources of all cloud elements should be done in a manner that exposes sufficient detail for resource managers to accurately place cloud services. However, these advertisements should be constrained so that the solution scales to numerous very large data centers with hundreds of thousands of servers, without overwhelming the Cloud Control Plane that receives and processes the advertisements.
Turning to
The resources and capabilities that are advertised span compute, network (service node), and storage devices, including dynamic capacities that fluctuate as cloud service requests come and go and also fluctuate due to varying traffic loads. A resource and capability database is maintained in a distributed and node fault-tolerant manner.
Capabilities advertisement is carried out by constructing a hierarchical tree of advertisement domains, also called advertisement levels or layers, as shown in
The lowest level of the hierarchy is typically the POD, e.g., PODs 151(1)-151(n) and 152(1) shown in
Thus, for POD 1.1 shown in
The aggregation nodes 160(1)-160(n), running the servers for the POD advertisement domain or level, generate the POD level Capabilities Directory data that summarizes the POD level inventory and propagate that data to a designated device at the next level up in the advertisement hierarchy, which is typically the Data Center level. In other words, the aggregation nodes 160(1)-160(n) send messages advertising their POD level capabilities summary data to a designated device of their corresponding data center, e.g., to Data Center edge node 133(1), e.g., an edge switch, in the example shown in
Each Data Center edge node receives the messages advertising the POD level capabilities summary data from the aggregation nodes of each constituent POD and generates a Data Center Level Capabilities Directory. The Data Center Level Capabilities Directory comprises data center level capabilities summary data that summarizes the capabilities of all PODs for that data center, without exposing individual compute, storage and service node devices in each POD, as well as individual resources at the data center level, i.e., those that are not included in any of the PODs. For example, Data Center edge node 133(1) generates a Data Center Level Capabilities Directory that indicates the aggregate VMs, storage capacity, bandwidth, FW, SLB for Data Center 1 and Data Center edge node 133(k) generates a Data Center Level Capabilities Directory that indicates the aggregate VMs, storage capacity, bandwidth, FW, SLB for Data Center k.
In the resulting Data Center Level Capabilities Directory, the aggregate POD capabilities, such as compute, L4-L7 services, and storage, that are advertised for a POD to the data center level are associated with the POD as a whole. Individual servers, appliances, and switches within the POD are not exposed at the data center level. Not “exposing” individual devices at the data center level means that the Data Center Level Capabilities Directory data does not specifically identify or refer to a particular device, e.g., server 190(1) in POD 151(1), that has a certain compute capacity (e.g., VM capacity). Rather, the capacity of any given component, e.g., server 190(1), is reflected in the summary data. Thus, the data center level capabilities summary data does not specifically refer to or identify any particular compute, storage or service node device in any of the PODs. Examples of data center level capabilities are data center edge switches, perimeter firewalls, inter-POD load balancers, intrusion detection systems, wide area network (WAN) acceleration services, etc. Furthermore, switches and other appliances that reside outside of the PODs are advertised individually at the data center level, including interfaces, so that the data center level topology can be derived.
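As a non-normative sketch with assumed field names, a Data Center edge node might fold the POD level summaries into such a directory as follows, carrying only capability totals upward rather than individual device identifiers.

def build_data_center_directory(pod_summaries):
    # Aggregate per-POD summary data into a single data center level summary;
    # only capability totals are carried upward, never individual device identifiers.
    directory = {}
    for pod in pod_summaries:
        for capability, amount in pod.items():
            directory[capability] = directory.get(capability, 0) + amount
    return directory

pod_summaries = [
    {"vms": 400, "storage_tb": 50, "bandwidth_gbps": 40, "fw": 2, "slb": 2},   # e.g., POD 1.1
    {"vms": 250, "storage_tb": 30, "bandwidth_gbps": 20, "fw": 1, "slb": 1},   # e.g., POD 1.n
]
print(build_data_center_directory(pod_summaries))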
The nodes running the servers for the data center advertisement domain summarize the data center level inventory and propagate that summary to the servers for the provider edge network level, also referred to herein as the Next Generation Network (NGN) advertisement domain. The NGN level is also referred to as the provider edge (PE) level. That is, the Data Center edge nodes 133(1)-133(k) send messages advertising their capabilities summary data to a designated device at the provider edge network or NGN level. As at the POD level, the aggregate data center capabilities such as compute, L4-L7 services, and storage capabilities are advertised as being associated with a given data center as a whole. Individual servers, appliances, and switches within the data center are not exposed at the provider edge network or NGN level, similar to that described above for the data center level. Switches that reside outside of the data centers are advertised individually at the provider edge network or NGN level, including interfaces, so that the NGN level topology can be derived. Thus, at a designated device at the provider edge network level, e.g., provider edge node 125, provider edge network level capabilities summary data is generated that summarizes the capabilities of compute, storage and network devices within each data center as a whole without exposing individual compute, storage and service node devices in each data center. Thus, like the data center level capabilities summary data, the provider edge network level capabilities summary data summarizes the capabilities for all PODs within a given data center without specifically referring to or identifying any particular compute, storage or service node device in any of the PODs of any of the data centers. Examples of provider edge network level capabilities summary data are the types and numbers of virtual private networks (VPNs) supported, proximity information (network distance between customer data center and service provider data center), performance of the connection between two data centers such as delay, jitter, and packet loss, and the number of virtual routers/forwarders supported by the PE routers.
Reference is now made to
When the servers within a data center are grouped into clusters such that each pod comprises a plurality of clusters of compute devices, then the designated device, e.g., the logic 800 of the aggregation node, is further configured to receive advertising messages that advertise capabilities of each cluster of compute devices in the corresponding pod and to generate the pod level capabilities summary data to include data representing the capabilities of each cluster of compute devices in the corresponding pod. When server clusters are employed, the pod level capabilities summary data may include cluster capabilities data without exposing (that is, without specifically referring to or identifying) individual compute devices.
Turning now to
Operation of the Data Center Level Capabilities Advertisement Process Logic 1000 of a data center edge node is now described in connection with the flow chart shown in
As explained above, in one example, the techniques described herein are used for two hierarchical levels: data center level and provider edge level. In this case, each data center is viewed as effectively one large pod. Thus, in this example scenario, data center level capabilities summary data is generated that summarizes the capabilities of the data center, and messages advertising the data center level capabilities summary data are sent from each data center to a designated device at the provider edge network level.
Operation of the Provider Edge Level Advertisement Capabilities Process Logic 1200 is now described with reference to
Techniques are described herein for hierarchical advertisement of resources and capabilities within and between data centers. Above the lowest level of the hierarchy (e.g., the POD level), aggregated/summarized resources and capabilities are associated with entire child (POD level) domains, without exposing individual elements within the child domain to higher level domains (e.g., data center level and provider edge network level) in the hierarchy.
These techniques utilize a “push” or “publish/subscribe” approach to discovery of resources and capabilities that scales much better than other network management approaches, e.g., those that involve polling. This allows for use across cloud computing networks comprising numerous data centers with hundreds of thousands of servers per data center. Although one implementation described herein involves three levels of hierarchy as described above (POD, Data Center, and Provider Edge/NGN), this mechanism allows for an arbitrary number of hierarchical levels, allowing customers to control the tradeoff between accuracy and scalability.
In addition, these techniques allow for tracking of dynamic capacities that fluctuate as cloud service requests come and go and also fluctuate due to varying traffic loads. Cloud elements can control their own resource allocation and utilization, as opposed to centralized resource control where all accounting and decision making is centralized at network management stations. Cloud elements do not need to be dedicated exclusively to one particular network management station, increasing flexibility and avoiding synchronization problems between cloud elements and network management stations.
In summary, in a computing system comprising a plurality of data centers, each data center comprising a plurality of compute, storage and service node devices, a method is provided comprising: generating data center level capabilities summary data that summarizes the capabilities of the data center; sending messages advertising the data center level capabilities summary data from a designated device of each data center to a designated device at a provider edge network level of the computing system; and at the designated device at the provider edge network level, generating provider edge network level capabilities summary data that summarizes capabilities of compute, storage and network devices for each data center as a whole and without exposing individual compute, storage and service node devices in each data center.
Similarly, provided herein in another form is one or more computer readable storage media encoded with software comprising computer executable instructions and when the software is executed operable to: generate data center level capabilities summary data that summarizes the capabilities of a data center in a computing system comprising a plurality of data centers; and send messages advertising the data center level capabilities summary data to a designated device at a provider edge network level of the computing system.
Further still, in another form, an apparatus is provided comprising a network interface unit configured to communicate over a network; and a processor. The processor is configured to: generate data center level capabilities summary data that summarizes the capabilities of a data center in a computing system comprising a plurality of data centers, each data center comprising compute, storage and service node devices; and send messages advertising the data center level capabilities summary data to a designated device at a provider edge network level of the computing system.
Moreover, a system is provided comprising a plurality of data centers, each data center comprising a plurality of compute, storage and service node devices; and a designated device of each data center configured to: generate data center level capabilities summary data that summarizes the capabilities of the data center; send messages advertising the data center level capabilities summary data to a designated device at a provider edge network level that is in communication with the designated devices for the respective data centers; and wherein the designated device at the provider edge network level is configured to: generate provider edge network level capabilities summary data that summarizes capabilities of compute, storage and network devices for each data center as a whole and without exposing individual compute, storage and service node devices in each data center.
Capabilities Based Routing
As explained above, the Provider Edge Level Capabilities Directory data 1205 comprises summary data that summarizes the capabilities of compute, storage and network devices for each data center 131, 132 as a whole without exposing individual compute, storage and service node devices in each data center. As will be explained next, this Provider Edge Level Capabilities Directory data 1205, when leveraged in an appropriate manner, can facilitate the efficient routing of cloud user requests to a selected data center.
More specifically, in present cloud computing environments, to locate a service in a cloud, individual data centers are polled, or centralized control is used. That is, at the time of placement of a resource request, different distribution centers (or data centers) are polled to see if the service can be placed there. This is not an efficient scheme, as the provisioning entity has to poll all possible distribution centers in order to find the best possible location. Alternatively, a centralized management entity can maintain a database of all the capabilities in all the data centers of a cloud service provider. Such a database is generally populated in a manual fashion, and it is extremely hard to keep accurate in real time. Significantly, neither of these two approaches is scalable as the size of the cloud increases, since polling message exchanges will increase with the size of the cloud, and maintaining a centralized database of all capabilities, especially if manually maintained, quickly becomes unmanageable.
Explained with reference to
The second function of the Provider Edge Level Sharing and Routing Logic Process Logic 1400 is to leverage the respective collections of capabilities data or capabilities summary data at each Provider Edge node 125 such that when a user request for cloud resources is received at a given one of the Provider Edge nodes 125, that user request can be efficiently routed to a data center having the appropriate resources available to serve the request. In other words, instead of having to poll each data center individually, each Provider Edge node 125 already is aware of the capabilities (perhaps in summary form) of each data center that is available via the cloud, or network 100. In addition, there is no single repository of the capabilities of each data center, but rather each Provider Edge node 125 is aware of the capabilities of each data center, and is, in accordance with capabilities publish/advertising schemes described herein, continuously updated with the capabilities of the data centers throughout the cloud, or network 100.
As has been noted, a cloud computing network, such as network 100, may consist of hundreds of data centers 131, 132 with thousands of individual devices (e.g., 138, 139, 160, 178, 179, 180, 190) providing various services such as compute, storage, firewalls, load balancers, Service Wire, Network Address Translation (NAT), etc. All of these data centers are inter-connected with a Service Provider's network (e.g., top level network 120) containing thousands of routing nodes (e.g., Provider Edge nodes 125) spanning multiple geographies around the globe.
Most of these services are scattered around various data centers, whereas some of the specialized services may be hosted on specific data centers for economical and business reasons. End users (or consumers) of these services may make a service request from anywhere in the network. Such a request may be considered a virtual data center service request since the user or client 110 is not aware of which data center might ultimately serve or fulfill the request. Typically, to place such a request, the management systems have to maintain centralized inventories of the entire cloud resources/capabilities. This is not only a massive scale issue, but from a practical perspective, the accuracy of maintaining such an inventory in real time is not easily achievable.
The presently described approach uses the network 100, and particularly top level network 120, to solve this massive scale problem. The intelligence is built into the network devices as well as service nodes to publish their capabilities into the network. As explained herein, these capabilities are aggregated at various hierarchical layers and data centers 131, 132. The actual or an abstract (or aggregated) view of these capabilities is published into the network by each of the Data Center edge routers 133-136. This information is published to the Provider Edge (PE) node 125, and is then distributed across the network. Every Provider Edge node 125 in the network has a directory (Provider Edge Level Capabilities Directory data 1205) of all the capabilities supported by all data centers in the network. This capability directory can be updated in real time as the capabilities in individual data centers change or are modified. For example, a device 190 may fail or certain capabilities may be consumed by other users. Capabilities updates are made by data centers by “pushing” any change in capabilities up through the network hierarchy, as described herein.
In one possible implementation, the capabilities are pushed only if there is a significant change, thereby making this a very scalable solution as opposed to continuous polling of such capabilities. For example, updated capabilities data may be advertised only if more than a 10% change (plus or minus) in available resources is detected by a Data Center Edge node 133-136.
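A minimal sketch of such a significance check, assuming a 10% threshold and illustrative capability names (neither of which is mandated by the disclosure), follows.

def should_republish(previous, current, threshold=0.10):
    # Push an updated advertisement only when some capability has changed by
    # more than the threshold fraction (plus or minus) since the last push.
    for capability, old_value in previous.items():
        new_value = current.get(capability, 0)
        if old_value == 0:
            if new_value != 0:
                return True
            continue
        if abs(new_value - old_value) / old_value > threshold:
            return True
    return False

print(should_republish({"vms": 100, "storage_tb": 50}, {"vms": 104, "storage_tb": 50}))  # False
print(should_republish({"vms": 100, "storage_tb": 50}, {"vms": 85, "storage_tb": 50}))   # True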
When a user request is initiated anywhere in the network, the Provider Edge node 125 closest to the request (that is, perhaps, the node first aware of the request) looks at or queries the Provider Edge Level Capabilities Directory data 1205, which is a collection of all data center capabilities, maps the requested capabilities to the “best suited” Data Center, and routes the service request to that Data Center.
Once the Data Center receives the routed request, the Data Center provisions the resources and may, as a result, need to republish its then-current capabilities back up through the network 100 hierarchy.
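Bringing the routing step together, the following sketch (in which all directory fields, including the use of network distance as a tiebreaker, are illustrative assumptions) shows how a Provider Edge node might consult its Provider Edge Level Capabilities Directory data and select a best suited data center.

def select_data_center(directory, request):
    # Keep only data centers whose advertised summary can satisfy every
    # requested capability, then prefer the nearest one; a real selection could
    # also weigh delay, jitter, packet loss, or cost.
    candidates = [
        name for name, capabilities in directory.items()
        if all(capabilities.get(capability, 0) >= amount
               for capability, amount in request.items())
    ]
    if not candidates:
        return None
    return min(candidates, key=lambda name: directory[name]["network_distance"])

directory = {
    "data-center-1": {"vms": 650, "storage_tb": 80, "network_distance": 5},
    "data-center-k": {"vms": 200, "storage_tb": 120, "network_distance": 2},
}
print(select_data_center(directory, {"vms": 300, "storage_tb": 60}))   # data-center-1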
Reference is now made to
Reference is now made to
With the foregoing in mind, the embodiments related to sharing capabilities data among provider edge nodes and routing service requests using that information have several advantages.
First, the described approach is highly scalable. The service requesting management entity does not have to poll hundreds of data centers and keep a massive capability directory. Instead, abstracted and normalized capabilities can be distributed across the network and accessible from anywhere in the network.
Second, the described approach leads to greater accuracy. Since changes in capabilities are advertised to the network on a real-time basis, an accurate view of the capabilities is available at all times. Failure of one or more devices/routers in the network does not prevent the distribution of the information throughout the network.
Third, the instant methodology leads to higher efficiency. That is, when a service request is instantiated, the service routing decisions are made locally on the node where the request is originated (or first received).
Fourth, the approach described herein is distributed. Specifically, since the information is distributed in the network, there are no issues with single (or multipoint) failures in the network.
Although the apparatus, system and method are illustrated and described herein as embodied in one or more specific examples, it is nevertheless not intended to be limited to the details shown, since various modifications and structural changes may be made therein without departing from the scope of the apparatus, system, and method and within the scope and range of equivalents of the claims. A Data Center can represent any location supporting capabilities enabling service delivery that are advertised. A Provider Edge Routing Node represents any system configured to receive, store or distribute advertised information, as well as any system configured to route based on the same information. Accordingly, it is appropriate that the appended claims be construed broadly and in a manner consistent with the scope of the apparatus, system, and method, as set forth in the following claims.