The present disclosure relates generally to methods for adaptive resource dimensioning for a cloud stack having a plurality of virtualization layers, and related methods and apparatuses.
With cloud technologies (e.g., virtualization) as the enabler, each site may be equipped with multiple virtualization layers (also referred to as multi-layer cloud stacks), such as OpenStack (OS) and/or Kubernetes (K8S). Workloads can be hosted in the different virtualization layers.
For automation of multi-layer cloud stacks, there currently exist certain challenges for VM bin packing. These include a recurring bin packing problem, in which items of different sizes are to be packed into other items of flexible sizes, which are in turn packed into other items of flexible sizes, and so on until an item with a fixed size is reached. Solutions to such a recurring bin packing problem are lacking.
Certain aspects of the disclosure and their embodiments may provide solutions to these or other challenges. A method for adaptive resource dimensioning for a cloud stack having a plurality of virtualization layers with resource dependencies between the plurality of virtualization layers in an infrastructure is provided. The method may provide a solution to a recurring bin packing problem.
In various embodiments, a method is provided performed by a network node for adaptive resource dimensioning for a cloud stack having a plurality of virtualization layers with resource dependencies between the plurality of virtualization layers in an infrastructure. The method comprises assigning a workload to a bin in the infrastructure based on one of (i) assign the workload to the bin when the bin has a resource capacity that supports a dimension of the workload, the bin comprising one of an existing bin and an existing sub-bin within the bin; and (ii) when the resource capacity of the bin is insufficient to support the dimension of the workload (a) create a new bin, the new bin having a dimensioned resource capacity that supports the dimension of the workload, and (b) assign the workload to the new bin having the dimensioned resource capacity that supports the dimension of the workload. The method further comprises outputting information representing one of the assigned bin and the assigned new bin for the workload, and a corresponding resource capacity of the assigned bin or the dimensioned resource capacity of the new bin.
In other embodiments, a network node for adaptive resource dimensioning for a cloud stack having a plurality of virtualization layers with resource dependencies between the plurality of virtualization layers in an infrastructure is provided. The network node includes at least one processor; and at least one memory connected to the at least one processor and storing program code that is executed by the at least one processor to perform operations. The operations include to assign a workload to a bin in the infrastructure based on one of (i) assign the workload to the bin when the bin has a resource capacity that supports a dimension of the workload, the bin comprising one of an existing bin and an existing sub-bin within the bin; and (ii) when the resource capacity of the bin is insufficient to support the dimension of the workload (a) create a new bin, the new bin having a dimensioned resource capacity that supports the dimension of the workload, and (b) assign the workload to the new bin having the dimensioned resource capacity that supports the dimension of the workload. The operations further comprise to output information representing one of the assigned bin and the assigned new bin for the workload, and a corresponding resource capacity of the assigned bin or the dimensioned resource capacity of the new bin.
In other embodiments, a network node for adaptive resource dimensioning for a cloud stack having a plurality of virtualization layers with resource dependencies between the plurality of virtualization layers in an infrastructure is adapted to perform operations. The operations include to assign a workload to a bin in the infrastructure based on one of (i) assign the workload to the bin when the bin has a resource capacity that supports a dimension of the workload, the bin comprising one of an existing bin and an existing sub-bin within the bin; and (ii) when the resource capacity of the bin is insufficient to support the dimension of the workload (a) create a new bin, the new bin having a dimensioned resource capacity that supports the dimension of the workload, and (b) assign the workload to the new bin having the dimensioned resource capacity that supports the dimension of the workload. The operations further comprise to output information representing one of the assigned bin and the assigned new bin for the workload, and a corresponding resource capacity of the assigned bin or the dimensioned resource capacity of the new bin.
In other embodiments, a computer program comprising program code to be executed by processing circuitry of a network node is provided, whereby execution of the program code causes the network node to perform operations. The operations include to assign a workload to a bin in the infrastructure based on one of (i) assign the workload to the bin when the bin has a resource capacity that supports a dimension of the workload, the bin comprising one of an existing bin and an existing sub-bin within the bin; and (ii) when the resource capacity of the bin is insufficient to support the dimension of the workload (a) create a new bin, the new bin having a dimensioned resource capacity that supports the dimension of the workload, and (b) assign the workload to the new bin having the dimensioned resource capacity that supports the dimension of the workload. The operations further comprise to output information representing one of the assigned bin and the assigned new bin for the workload, and a corresponding resource capacity of the assigned bin or the dimensioned resource capacity of the new bin.
In other embodiments, a computer program product comprising a non-transitory storage medium including program code to be executed by processing circuitry of a network node is provided, whereby execution of the program code causes the network node to perform operations. The operations include to assign a workload to a bin in the infrastructure based on one of (i) assign the workload to the bin when the bin has a resource capacity that supports a dimension of the workload, the bin comprising one of an existing bin and an existing sub-bin within the bin; and (ii) when the resource capacity of the bin is insufficient to support the dimension of the workload (a) create a new bin, the new bin having a dimensioned resource capacity that supports the dimension of the workload, and (b) assign the workload to the new bin having the dimensioned resource capacity that supports the dimension of the workload. The operations further comprise to output information representing one of the assigned bin and the assigned new bin for the workload, and a corresponding resource capacity of the assigned bin or the dimensioned resource capacity of the new bin.
The accompanying drawings, which are included to provide a further understanding of the disclosure and are incorporated in and constitute a part of this disclosure, illustrate certain non-limiting embodiments of this disclosure. In the drawings:
The present disclosure will now be described more fully hereinafter with reference to the accompanying drawings, in which examples of embodiments of this disclosure are shown. Embodiments of the present disclosure may, however, be embodied in many different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of present embodiments to those skilled in the art. It should also be noted that these embodiments are not mutually exclusive. Components from one embodiment may be tacitly assumed to be present/used in another embodiment.
The following description presents various embodiments of the disclosed subject matter. These embodiments are presented as teaching examples and are not to be construed as limiting the scope of the disclosed subject matter. For example, certain details of the described embodiments may be modified, omitted, or expanded upon without departing from the scope of the described subject matter.
While hosting of workloads in different virtualization layers may bring flexibility for service provisioning, it also increases the complexity of lifecycle management (LCM) during service instance design and assignment. For example, cloud and networking resources can be overprovisioned due to a lack of an automated mechanism to scale cloud and networking resources up/down or in/out according to their use. There is a need for adaptive dimensioning of cloud layers to smooth (e.g., significantly smooth) the process of service provisioning and to better utilize virtualization layer resources.
An existing approach for resource dimensioning places mobile edge computing (MEC) nodes at each candidate location. See e.g., P. Zhao and G. Dan, “Joint Resource Dimensioning and Placement for Dependable Virtualized Services in Mobile Edge Clouds,” IEEE Transactions on Mobile Computing, DOI 10.1109/TMC.2021.3060118 (18 Feb. 2021) (“Zhao”). Zhao references another approach to compute placement of primary and secondary instances for virtualized services (VSs) over the MEC nodes. Resource dimensioning discussed in Zhao involves determining the location, the number of MEC nodes, and the amount of MEC resources to be deployed.
Another approach regarding resource dimensioning includes a framework to monitor and dynamically dimension resources during the execution of parallel workflows in clouds. See e.g., Coutinho, R., Frota, Y., Ocaña, K. et al., "A Dynamic Cloud Dimensioning Approach for Parallel Scientific Workflows: A Case Study in the Comparative Genomics Domain", J Grid Computing 14, 443-461 (2016) ("Coutinho"). Coutinho describes monitoring the resource usage of VMs and estimating the number of VMs to instantiate for workflow execution.
In another approach regarding resource dimensioning, a service is described based on a multi-objective cost function to determine an initial configuration for a virtual cluster. See e.g., Daniel de Oliveira, Vitor Viana, Eduardo Ogasawara, Kary Ocana, and Marta Mattoso, “Dimensioning the virtual cluster for parallel scientific workflows in clouds,” Proceedings of the 4th ACM workshop on Scientific cloud computing (Science Cloud '13). Association for Computing Machinery, New York, NY, USA, 5-12, 2013 (“Oliveira”). Oliveira describes a decision based on workflow characteristics with budget and deadline constraints; and defining an optimal/near-optimal number of VMs according to the type of VMs provided by the cloud. The dimensioning is done before the workflow execution in the cloud.
VM packing, bin packing, and knapsack packing will now be discussed.
A goal of packing problems is to find the best way to pack a set of items of given sizes into bins with fixed capacities. In a packing problem known as bin packing, there can be multiple bins of equal capacity. A goal in such bin packing is to find the smallest number of bins that will hold all the items (also referred to herein as workloads).
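A simple heuristic for this class of problem, such as a first-fit-decreasing variant of the First Fit approach referenced later in this disclosure, can be sketched as follows. This sketch is for illustration purposes only; the function and variable names are illustrative and not part of any described embodiment:

```python
def first_fit_decreasing(items, capacity):
    """Pack item sizes into as few fixed-capacity bins as possible,
    using the classic first-fit-decreasing heuristic."""
    bins = []        # remaining free capacity of each open bin
    placement = []   # (item, bin_index) pairs
    for item in sorted(items, reverse=True):
        for i, free in enumerate(bins):
            if item <= free:          # first bin the item fits into
                bins[i] -= item
                placement.append((item, i))
                break
        else:
            bins.append(capacity - item)  # open a new bin
            placement.append((item, len(bins) - 1))
    return len(bins), placement

# e.g., workloads of sizes 5, 4, 3, 2, 2 packed into bins of capacity 8
count, placement = first_fit_decreasing([5, 4, 3, 2, 2], 8)
# -> count == 2
```

Sorting the items in decreasing order before packing tends to reduce the number of bins opened compared with packing in arrival order.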
A VM placement or VM packing problem can be seen as a type of bin packing problem, where an aim is to pack or place a set of VM instances onto physical servers such that the number of physical servers is minimized. Both bin packing and VM placement problems have been discussed as being non-deterministic polynomial-time (NP) hard problems. For example, S. Rampersaud and D. Grosu, "Sharing-Aware Online Virtual Machine Packing in Heterogeneous Resource Clouds," IEEE Transactions on Parallel and Distributed Systems, vol. 28, no. 7, pp. 2046-2059, 1 Jul. 2017 ("Rampersaud"), discusses a sharing-aware VM packing problem which has the same objective as a standard VM packing problem (i.e., minimize the number of bins), but allows the VM instances collocated on the same physical server to share memory pages, thus reducing the amount of allocated cloud resources.
A VM packing problem can also be considered as a multi-dimensional bin packing problem (MDBPP) or a multi-capacity bin packing problem (MCBPP). MDBPP and MCBPP are two versions of a bin packing problem. In MCBPP, each bin has multiple capacities, and each item has multiple weights. In Bassem, C., Bestavros, A.: "Multi-Capacity bin packing with dependent items and its application to the packing of brokered workloads in virtualized environments", Future Gener. Comput. Syst. 72, 129-144 (2017) ("Bassem"), the MCBPP version was considered for resource allocation in the cloud. Bassem considered multi-dimensional bins, where each dimension represents a resource the bin offers (e.g., network, CPU). The resources consumed over a bin's different dimensions are a function of the items packed into that bin and can vary according to the collocation of the items.
In another approach, the MDBPP version was considered for VM packing. A difference between MCBPP and MDBPP is that in MCBPP, once an item is packed into a bin, its resources (e.g., CPU, memory) cannot be allocated to another item in that bin, while in MDBPP, the capacity can be shared and is not dedicated to a single item. For example, Pachorkar N, Ingle R., "Multi-dimensional affinity aware VM placement algorithm in cloud computing", Int J Adv Comput Res. 2013; 3(4):121 ("Pachorkar") considered the MDBPP class for VM packing. Pachorkar considered multiple dimensions (e.g., CPU, memory) together when allocating VMs to Physical Machines (PMs). Memory and network affinity were also considered among the VMs during VM placement. Pachorkar describes that, after initial placement of VMs on different PMs, a system finds the memory and network affinity between different VMs hosted on different PMs and tries to place these VMs on the same PM.
In knapsack packing, there is a single container/knapsack, and the items have values and sizes. A goal is to pack a subset of items that has a maximum total value. Unlike in the bin packing problem, the number of containers is fixed. El Motaki, S., Yahyaouy, A., Gualous, H. et al, "Comparative study between exact and metaheuristic approaches for virtual machine placement process as knapsack problem", J Supercomput 75, 6239-6259 (2019) ("Motaki"), discusses use of the knapsack packing problem for the VM packing problem. Camati RS, Lima L Jr, Calsavara A, "Solving the virtual machine placement problem as a multiple multidimensional knapsack problem", ICN 2014: The Thirteenth International Conference on Networks ("Camati") discusses using a multidimensional multiple knapsack packing version to solve a VM placement problem. In multi-dimensional knapsack packing, items have more than one quantity, such as weight and volume, and the knapsack has a capacity for each quantity. In a multiple knapsack problem, there are multiple knapsacks and a goal is to maximize the total value of packed items in all knapsacks. Camati describes a goal to maximize the placement ratio, considering the number of placed VMs and the total number of requests in the queue. The maximum capacities of the knapsacks were chosen statically.
In another approach, some challenges are discussed for existing algorithms (e.g., First Fit, Best Fit) to solve the packing problem for VM placement. See e.g., Kumaraswamy, S., & Nair, M. K., "Bin packing algorithms for virtual machine placement in cloud computing: A review", International Journal of Electrical and Computer Engineering, 9(1), 512-524 (2019) ("Kumaraswamy"). One challenge is that there may be affinity rules between two VMs under which they may be required to be placed together in the same bin. Another challenge is that existing algorithms do not consider dynamic VM capacity, since VM capacities may not remain static over their lifetimes. The same also applies to bin sizes.
Thus, in the approaches described above, bin packing, VM packing, and knapsack packing have been referred to as NP-hard problems for which different heuristics have been described.
In the context of automation of multi-layer cloud stacks, the VM packing problem may be further extended to a recurring (also referred to as embedded) bin packing problem, where items of different sizes must be packed into other items of flexible sizes, which are in turn packed into other items of flexible sizes, and so on until an item with a fixed size is reached. Existing approaches, e.g., the approaches discussed above, lack a solution to such a problem. While some approaches discuss resource dimensioning of a single layer of cloud stacks, e.g., VMs, dependencies and hierarchies of cloud stack layers are ignored.
Certain aspects of the disclosure and their embodiments may provide solutions to these or other challenges.
Various embodiments of the present disclosure include a method for adaptive resource dimensioning of cloud stacks with multiple virtualization layers during service instance design and assign time. The method can be an extension to a workload assignment workflow. In some embodiments, the method handles resource dependencies between virtualization layers, e.g., OpenStack and Kubernetes.
In some embodiments, a dynamic model is created for the problem, where the size of an item is defined in multiple dimensions (e.g., CPU, memory, and storage capacity) and consumed by its inner (also referred to as "child") items accordingly. An item itself consumes volume (e.g., resources) from the bin (e.g., parent) it is packed into; therefore, the volume (e.g., resources) it offers to its children is less than or equal to the volume (e.g., resources) it consumes from its parent. The difference between the consumed and offered volume (e.g., resources) is the item's overhead. Physical resource space (e.g., CPUs) is converted to virtual resource space (e.g., virtual central processing units (vCPUs)) while maintaining the connection between the physical and virtual resources. This conversion is transparent to the workload assignment workflow.
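For illustration purposes only, the consumed/offered/overhead relationship described above may be modeled as follows. The class and method names (Item, offered, residual) are illustrative assumptions, not part of the disclosure:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    """An item consumes resources from its parent bin; if it acts as a
    bin itself, it offers (consumed - overhead) to its child items."""
    name: str
    consumed: dict                               # e.g. {"vcpu": 12}
    overhead: dict = field(default_factory=dict)
    children: list = field(default_factory=list)

    def offered(self):
        # Volume available to children: consumed minus the item's overhead.
        return {k: v - self.overhead.get(k, 0)
                for k, v in self.consumed.items()}

    def residual(self):
        # Offered volume not yet consumed by the children.
        used = {}
        for child in self.children:
            for k, v in child.consumed.items():
                used[k] = used.get(k, 0) + v
        return {k: v - used.get(k, 0) for k, v in self.offered().items()}

# A VM consuming 12 vCPUs with a 2-vCPU overhead, hosting a 6-vCPU app:
vm = Item("vm", consumed={"vcpu": 12}, overhead={"vcpu": 2})
vm.children.append(Item("app", consumed={"vcpu": 6}))
# vm.offered() -> {"vcpu": 10}; vm.residual() -> {"vcpu": 4}
```

Because each Item can carry children of its own, the same structure nests naturally into the recurring (embedded) bin hierarchy described above.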
In some embodiments, the dynamic method can be applied to multi-layered software virtualization, where items and bins can be servers, virtualization layers, or applications.
In some embodiments, in a virtual network function (VNF) embedding problem, some of the items discussed above are immutable while other items are mutable for a time window. In an example embodiment, instead of pre-defining a resource requirement of a virtualization layer (e.g., a K8S cluster) during a workload assignment process, the method treats such layers as dynamically resizable entities. In some embodiments, a resource requirement of earlier placed virtualization elements is adjusted.
In some embodiments, the method includes a heuristic VNF embedding algorithm, which homes/assigns VNFs to recurring virtualization layers which are designed and dimensioned in runtime.
Certain embodiments may provide one or more of the following technical advantages. The method may solve a multi-layered and multi-dimensional bin packing problem, where each item may appear as a bin for other items using adaptive dimensioning that considers a hierarchical relationship of virtualization layers, including that a change in a component's resource requirement may affect ancestor or successor nodes in the hierarchy. As a consequence, better utilization of system resources may be obtained. For example, instead of overprovisioning/under-provisioning, a resource footprint of virtualization layers may be intelligently adjusted, and less virtualization layer overhead may be incurred.
Operations of the method will now be discussed further with respect to a non-limiting example embodiment, which is explained in the context of the following sequence of operations.
For the selected server iteration 503, a second process to find a bin for the workload (“Find Bin process”) is called 600. The Find Bin process is called in order to check whether the workload can be assigned to a bin within the current server. If yes 617, the whole process stops and returns 619 True, otherwise, the next server is checked in iteration 503.
If there are no more servers to check in iteration 503, the Find Server process selects 505, 509 the best bin template for the workload. The rationale to try to allocate a bin is that the genuine workload may not be directly packable to servers but only to some container bins (e.g., VM). The selected bin template will be treated as a workload to recursively call the Find Server process 500. To help ensure that the capacity requirements of the workload can be fulfilled, when the template is selected 509, its capacity is also dimensioned accordingly. In a further example embodiment, in a case where the workload (e.g., a K8S node, requiring 6 virtual central processing units (vCPUs)) needs to be packed in a VM, a VM template will be selected. The capacity of this VM will be at least the sum of the workload requirement and the virtualization overhead (e.g., K8S layer over a VM layer), which is 8 vCPUs if the overhead is 2 vCPUs.
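The dimensioning of a selected bin template described above (workload requirement plus virtualization overhead) can be expressed, for illustration, as a minimal helper; the function name dimension_template and its signature are illustrative assumptions:

```python
def dimension_template(workload_demand, layer_overhead):
    """Dimension a bin template so that its capacity covers the workload
    requirement plus the virtualization overhead of the new layer,
    per resource dimension."""
    dims = set(workload_demand) | set(layer_overhead)
    return {d: workload_demand.get(d, 0) + layer_overhead.get(d, 0)
            for d in dims}

# A K8S node needing 6 vCPUs, packed into a VM whose K8S-over-VM
# overhead is 2 vCPUs, yields an 8-vCPU VM template:
vm_capacity = dimension_template({"vcpu": 6}, {"vcpu": 2})
# -> {"vcpu": 8}
```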
If the bin template (as a workload) can be successfully assigned 511, then the Find Server process goes to the Find Bin process 600 for the workload to check 617, 619, 621 whether the workload can be assigned to the newly created bin. It is noted that the workload is swapped back to the object (bin or workload), which initiated the template selection.
Otherwise, the Find Server process tries to create 515 a new server and calls the Find Server process again. It is noted that this effort is tried 513, 523 only once. For purposes of discussion, and without loss of generality, it is assumed that a new server (in other words, an empty server) 517, 519 can accommodate any genuine workload with any number of dependent bin requirements. If servers can be of different resource dimensions, then one needs to either start with the biggest server or iterate through different server sizes before concluding that an allocation is not possible.
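For illustration purposes only, the overall control flow of the Find Server process described above may be sketched as follows. The callback signatures (find_bin, select_template, new_server) are illustrative assumptions, and the sketch omits details such as template dimensioning and swapping the workload back to the initiating object:

```python
def find_server(servers, workload, find_bin, select_template, new_server,
                can_create=True):
    """Sketch of the Find Server process: existing servers first, then a
    dimensioned container bin, then (once) a newly created server."""
    # 1. Try every existing server via the Find Bin process.
    for server in servers:
        if find_bin(server, workload):
            return True
    # 2. Otherwise, try to allocate a dimensioned container bin
    #    (e.g., a VM template) and place the workload inside it.
    template = select_template(workload)
    if template is not None:
        if find_server(servers, template, find_bin, select_template,
                       new_server, can_create):
            if any(find_bin(s, workload) for s in servers):
                return True
    # 3. As a last resort, create a new (empty) server exactly once.
    if can_create:
        servers.append(new_server())
        return find_server(servers, workload, find_bin, select_template,
                           new_server, can_create=False)
    return False
```

With a trivial one-dimensional model (servers as remaining free capacity), a 6-unit workload that does not fit a server with 4 free units triggers the creation of a new server, to which the workload is then assigned.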
In operation 603, the Find Bin process 600 checks whether the bin supports the workload.
If yes, the Find Bin process 600 checks 605 whether the bin can host the workload. This operation includes checking whether the bin has the resource capacity needed to host the workload. If the bin has enough resources to host the workload, the workload is assigned 615 to the bin. If the bin does not have enough resources to host the workload, a third process 700 is invoked to extend the resources of the bin (Extend Bin process 700) with a pre-calculated 611 extra capacity. If the bin resources are extended successfully 711, the workload is assigned 607 to the bin.
If the bin does not support the workload, the Find Bin process 600 checks 613, 615 whether there are further items in the bin that are sub-bins that can host the workload. If yes, the Find Bin process 600 is invoked recursively to find a bin among these items.
If there are no more items in the bin, the Find Bin process 600 returns 623 false.
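For illustration purposes only, the recursive structure of the Find Bin process may be sketched as follows, using a simple dictionary-based bin model. The field names (hosts, items, capacity, size, kind) are illustrative assumptions, and the extension step (Extend Bin process 700) is omitted from this sketch:

```python
def find_bin(bin_, workload):
    """Sketch of the Find Bin process: try to host the workload in this
    bin; otherwise recurse into the bin's sub-bins."""
    if workload["kind"] in bin_["hosts"]:           # hosting model check
        if residual(bin_) >= workload["size"]:
            bin_["items"].append(workload)          # assign the workload
            return True
        # Insufficient residual capacity: a fuller implementation would
        # invoke the Extend Bin process here with the shortfall.
    # Recurse into items that are themselves bins (sub-bins).
    for sub in (i for i in bin_["items"] if "hosts" in i):
        if find_bin(sub, workload):
            return True
    return False

def residual(bin_):
    # Capacity not yet consumed by the items packed into the bin.
    return bin_["capacity"] - sum(i["size"] for i in bin_["items"])
```

For example, a server that hosts only VMs cannot host a pod directly, but the recursion finds a VM inside the server that can.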
The Extend Bin process 700 checks 703 whether the bin can be extended with a given extra capacity (e.g., a vector of extra capacities over multiple resource dimensions, such as CPU, memory, etc.).
If the bin can be extended, an extension is performed 705.
If not, the Extend Bin process 700 checks 709 whether the bin has a parent. In a further example embodiment, if the bin is a server, then it does not have a parent and hence cannot be extended. If that is the case, the Extend Bin process 700 returns 715 false. If the bin has a parent, the Extend Bin process 700 checks whether the parent can be extended so that an extension of the current bin can be supported. This is done by recursively invoking the Extend Bin process 700.
If the parent bin is not successfully extended 711, the Extend Bin process 700 returns 713 false. If the parent bin can be extended, an extension is performed 705 and the Extend Bin process 700 returns 707 true for the extended parent bin.
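For illustration purposes only, the recursive parent-extension logic of the Extend Bin process may be sketched as follows, again with an illustrative dictionary-based bin model (the field names capacity, children, parent are assumptions):

```python
def extend_bin(bin_, extra):
    """Sketch of the Extend Bin process: grow a bin by `extra` capacity
    units, recursively extending its parent when the parent's residual
    capacity is insufficient."""
    parent = bin_.get("parent")
    if parent is None:
        return False  # e.g., a physical server: fixed capacity
    shortfall = max(0, extra - residual(parent))
    if shortfall and not extend_bin(parent, shortfall):
        return False  # the ancestor chain cannot absorb the growth
    bin_["capacity"] += extra
    return True

def residual(bin_):
    # Capacity the bin offers minus what its children already consume.
    return bin_["capacity"] - sum(c["capacity"] for c in bin_["children"])
```

In a two-level example, a 12-unit VM on a 28-unit server can grow by 10 units (the server's residual covers it), but not by a further 10, because the server itself has no parent to extend.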
A further example embodiment is now discussed. In this example embodiment, each physical server has a capacity of 28 CPUs. It is noted that each server also has a memory capacity; however, for ease of discussion, this example embodiment is directed to the CPU dimension. Further, in the example embodiment, each VM takes 12 CPUs. The virtualization overhead is as follows: (i) hosting a VM layer over a physical layer requires 4 CPUs, and (ii) hosting a K8S layer over a VM requires 2 CPUs. This example embodiment has four workloads (identified as Apps 1-4 below). The following table summarizes the virtualization and capacity requirements for the four workloads, the VM layer, and the K8S layer:
In this example embodiment, performing the method of the present disclosure produces an output of information for two servers. In accordance with some embodiments, the outputted information is visually illustrated in a block diagram.
The capacity of each element (App, VM, etc.) is shown.
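The per-layer capacity arithmetic implied by this example embodiment (28-CPU servers, 12-CPU VMs, 4-CPU VM-layer overhead, 2-CPU K8S-layer overhead) can be checked as follows. The variable names are illustrative, and the sketch assumes the 4-CPU VM-layer overhead is incurred once per server:

```python
# Capacity figures from the example embodiment (CPU dimension only):
server_cpus = 28        # each physical server
vm_size = 12            # each VM consumes 12 CPUs from its server
vm_layer_overhead = 4   # hosting a VM layer over a physical layer
k8s_layer_overhead = 2  # hosting a K8S layer over a VM

# CPUs a server can offer to VMs once the VM layer is in place:
usable_per_server = server_cpus - vm_layer_overhead   # 24
vms_per_server = usable_per_server // vm_size         # 2 full-size VMs
# CPUs each VM can offer to workloads hosted in a K8S layer:
per_vm_for_k8s = vm_size - k8s_layer_overhead         # 10
```

Under these assumptions, each server accommodates two 12-CPU VMs, and each VM offers 10 vCPUs to K8S-hosted workloads, which is consistent with the four workloads being spread over two servers.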
In some embodiments, the method models multiple virtualization layers as a hierarchical, multi-instance, multi-dimensional bin packing problem. In some embodiments, this is a recurring bin packing problem, where each item in a bin can become a bin itself hosting other items.
In some embodiments, the method includes an overhead model for the bins corresponding to the real virtualization layers' overheads.
In some embodiments, the method includes a hosting model, which can allow constrained assignment of items to bins (e.g., a container can only be assigned to a container virtualization environment).
In some embodiments, the model includes a heuristic algorithm that adaptively creates and/or resizes bins (e.g., virtualization layers) to accommodate workloads.
In some embodiments, the method designs virtualization layers and workload assignment together.
In some embodiments, the method includes modeling virtualization layer behavior.
In some embodiments, the method includes building a digital twin to simulate virtualization layer lifecycle management actions.
In some embodiments, the method includes determining virtualization layer sharing strategies.
In some embodiments, the method includes applying a bin-packing technique to organize and dimension virtualization layers for workload hosting.
In some embodiments, the method includes modeling virtualization layer overheads.
In some embodiments, the method includes allocating servers to virtualization layers, virtualization layers to virtualization layers, and workloads to virtualization layers.
In some embodiments, the method includes interleaving the workload assignment with the dimensioning of virtualization layers.
Network node 1000 may be provided, for example, as discussed herein with respect to network node 411.
For ease of discussion, a network node will now be described.
As discussed herein, operations of the network node may be performed by processing circuitry 1003, network interface 1007, and/or a transceiver. For example, processing circuitry 1003 may control the transceiver to transmit downlink communications over a radio interface and/or to receive uplink communications over a radio interface. Similarly, processing circuitry 1003 may control network interface 1007 to transmit communications through network interface 1007 to one or more other network nodes and/or to receive communications through the network interface from one or more other network nodes, servers, etc. Moreover, modules may be stored in memory 1005, and these modules may provide instructions so that when instructions of a module are executed by processing circuitry 1003, processing circuitry 1003 performs respective operations (e.g., operations discussed herein with respect to example embodiments relating to network nodes). According to some embodiments, network node 1000 and/or an element(s)/function(s) thereof may be embodied as a virtual node/nodes and/or a virtual machine/machines.
According to some other embodiments, a network node may be implemented as a core network node without a transceiver. In such embodiments, transmission to a server, another network node, etc. may be initiated by the network node 1000 so that transmission to the server, network node, etc. is provided through a network node 1000 including a transceiver (e.g., through a base station or radio access network (RAN) node). According to embodiments where the network node is a RAN node including a transceiver, initiating transmission may include transmitting through the transceiver.
Embodiments of the network node may include additional components beyond those shown.
Network node 1000 is illustrated in an example block diagram.
Determining, calculating, obtaining or similar operations described herein may be performed by processing circuitry, which may process information by, for example, converting the obtained information into other information, comparing the obtained information or converted information to information stored in the network node, and/or performing one or more operations based on the obtained information or converted information, and as a result of said processing making a determination.
In certain embodiments, some or all of the functionality described herein may be provided by processing circuitry executing instructions stored in memory, which in certain embodiments may be a computer program product in the form of a non-transitory computer-readable storage medium. In alternative embodiments, some or all of the functionality may be provided by the processing circuitry without executing instructions stored on a separate or discrete device-readable storage medium, such as in a hard-wired manner. In any of those particular embodiments, whether executing instructions stored on a non-transitory computer-readable storage medium or not, the processing circuitry can be configured to perform the described functionality. The benefits provided by such functionality are not limited to the processing circuitry alone or to other components of the computing device, but are enjoyed by the network nodes as a whole.
Operations of a network node (e.g., network node 1000) will now be discussed.
In some embodiments, the existing bin and the new bin, respectively, include a bin that is adaptive over a period of time to be one or more of a physical server, a virtual machine, a virtualization layer from the plurality of virtualization layers, and an application for the workload or for another workload.
In some embodiments, the assigning (1101) the workload to a bin includes performing a first process to find a server. The first process includes (i) iterating through existing servers to select a server, and (ii) for a selected server, checking whether the selected server can host the workload. The assigning (1101) further includes, when checking whether the selected server can host the workload, performing a second process to find a bin of the server. The second process includes checking whether a bin of the server can support and host the workload and whether the bin has the resource capacity to support the dimension of the workload; and performing one of (i) making the assignment to the bin of the server when the bin of the server has the resource capacity to support the dimension of the workload, and (ii) checking whether there is a sub-bin in the bin of the server that can host the workload when the bin of the server lacks the resource capacity to support the dimension of the workload.
In some embodiments, creating the new bin includes: selecting a bin template for the workload; dimensioning a capacity of the bin template to result in the dimensioned resource capacity that supports the dimension of the workload; and treating the selected bin template as the workload to recursively invoke and perform the second process, wherein the bin in the second process is the new bin.
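This template-based creation can be sketched as below (the dictionary representation and the fixed overhead value are assumptions): a hypothetical new bin is dimensioned to the workload, then treated as a workload itself and placed by the same recursive bin search that the second process uses.

```python
def create_new_bin(servers, demand, overhead=1.0):
    # Dimension a hypothetical bin template to the workload plus an assumed
    # virtualization overhead, then recursively search existing bins for a
    # host large enough for the dimensioned template.
    new_bin = {"capacity": demand + overhead, "used": overhead, "sub_bins": []}
    need = new_bin["capacity"]

    def place(b):
        # Recursive reuse of the second process, with the template as workload.
        if b["capacity"] - b["used"] >= need:
            b["used"] += need
            b["sub_bins"].append(new_bin)
            return True
        return any(place(sub) for sub in b["sub_bins"])

    for server in servers:
        if place(server):
            return new_bin
    return None
```

Because the template is itself placed as a workload, the same logic applies at every nesting depth of the cloud stack.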
In some embodiments, the third process to extend the bin includes checking whether the bin can be extended with the additional resource capacity; and performing one of (i) when the bin can be extended, extending the bin, and (ii) when the bin cannot be extended, checking whether the bin has a parent bin.
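A minimal sketch of the third process, assuming a hypothetical per-bin `max_capacity` ceiling and assuming (the disclosure leaves this open) that the parent check escalates the extension to the parent bin:

```python
def extend_bin(b, extra):
    # Third process: check whether the bin can be extended by the additional
    # resource capacity; if it cannot, check whether it has a parent bin and,
    # as an assumed continuation, try extending the parent instead.
    if b["capacity"] + extra <= b.get("max_capacity", float("inf")):
        b["capacity"] += extra
        return True
    parent = b.get("parent")
    return extend_bin(parent, extra) if parent is not None else False
```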
In some embodiments, the extending the bin includes calculating the additional resource capacity. The calculating includes one of (i) when the bin has a virtualization layer, determining a difference between the dimension of the workload and a residual capacity of the bin; and (ii) when the bin lacks a virtualization layer, determining the difference between the dimension of the workload and a residual capacity of the bin, and adding a virtualization overhead to the difference.
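The calculation can be sketched as follows; the overhead value is an arbitrary placeholder, not a figure given in the disclosure:

```python
def additional_capacity(workload_dim, residual, has_virtualization_layer,
                        virtualization_overhead=0.5):
    # Difference between the workload dimension and the bin's residual
    # capacity; the assumed overhead is added only when the bin lacks a
    # virtualization layer, since one would then have to be provisioned.
    diff = workload_dim - residual
    return diff if has_virtualization_layer else diff + virtualization_overhead
```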
In some embodiments, the network node includes one of an operation support system (OSS) node and a business support system (BSS) node.
The various operations from the flow charts of the figures may be optional with respect to some embodiments.
Further definitions and embodiments are discussed below.
In the above description of various embodiments of the present disclosure, it is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the present disclosure. Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the present disclosure belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of this specification and the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
When an element is referred to as being “connected”, “coupled”, “responsive”, or variants thereof to another element, it can be directly connected, coupled, or responsive to the other element or intervening elements may be present. In contrast, when an element is referred to as being “directly connected”, “directly coupled”, “directly responsive”, or variants thereof to another element, there are no intervening elements present. Furthermore, “coupled”, “connected”, “responsive”, or variants thereof as used herein may include wirelessly coupled, connected, or responsive. As used herein, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. Well-known functions or constructions may not be described in detail for brevity and/or clarity. The term “and/or” includes any and all combinations of one or more of the associated listed items.
It will be understood that although the terms first, second, third, etc. may be used herein to describe various elements/operations, these elements/operations should not be limited by these terms. These terms are only used to distinguish one element/operation from another element/operation. Thus, a first element/operation in some embodiments could be termed a second element/operation in other embodiments without departing from the teachings of the present disclosure. The same reference numerals or the same reference designators denote the same or similar elements throughout the specification.
As used herein, the terms “comprise”, “comprising”, “comprises”, “include”, “including”, “includes”, “have”, “has”, “having”, or variants thereof are open-ended, and include one or more stated features, integers, elements, steps, components or functions but do not preclude the presence or addition of one or more other features, integers, elements, steps, components, functions or groups thereof. Furthermore, as used herein, the common abbreviation “e.g.”, which derives from the Latin phrase “exempli gratia,” may be used to introduce or specify a general example or examples of a previously mentioned item, and is not intended to be limiting of such item. The common abbreviation “i.e.”, which derives from the Latin phrase “id est,” may be used to specify a particular item from a more general recitation.
Example embodiments are described herein with reference to block diagrams and/or flowchart illustrations of computer-implemented methods, apparatus (systems and/or devices) and/or computer program products. It is understood that a block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by computer program instructions that are performed by one or more computer circuits. These computer program instructions may be provided to a processor circuit of a general purpose computer circuit, special purpose computer circuit, and/or other programmable data processing circuit to produce a machine, such that the instructions, which execute via the processor of the computer and/or other programmable data processing apparatus, transform and control transistors, values stored in memory locations, and other hardware components within such circuitry to implement the functions/acts specified in the block diagrams and/or flowchart block or blocks, and thereby create means (functionality) and/or structure for implementing the functions/acts specified in the block diagrams and/or flowchart block(s).
These computer program instructions may also be stored in a tangible computer-readable medium that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable medium produce an article of manufacture including instructions which implement the functions/acts specified in the block diagrams and/or flowchart block or blocks. Accordingly, embodiments of the present disclosure may be embodied in hardware and/or in software (including firmware, resident software, micro-code, etc.) that runs on a processor such as a digital signal processor, which may collectively be referred to as “circuitry,” “a module” or variants thereof.
Many variations and modifications can be made to the embodiments without substantially departing from the principles of the present disclosure. All such variations and modifications are intended to be included herein within the scope of the present disclosure. Accordingly, the above disclosed subject matter is to be considered illustrative, and not restrictive, and the examples of embodiments are intended to cover all such modifications, enhancements, and other embodiments, which fall within the spirit and scope of the present disclosure. Thus, to the maximum extent allowed by law, the scope of the present disclosure is to be determined by the broadest permissible interpretation of the present disclosure including the examples of embodiments and their equivalents, and shall not be restricted or limited by the foregoing detailed description.
Filing Document | Filing Date | Country | Kind |
---|---|---|---
PCT/IB2021/059469 | 10/14/2021 | WO |