Network function virtualization (NFV) is an emerging technology that virtualizes the physical networking elements of a network and allows them to work together. This trend leverages information technology (IT) virtualization technology while also demanding behaviors that are specific to telecommunications company (Telco) environments. One of the more relevant behaviors is resource allocation, as the needs of a router differ from those of a server: for a classical IT application, an internet protocol (IP) address and random access memory (RAM) may be enough, but for a networking application, other parameters such as bandwidth are also relevant.
Various features of the present disclosure will be apparent from the detailed description which follows, taken in conjunction with the accompanying drawings, which together illustrate certain example features, and wherein:
In the following description, for purposes of explanation, numerous specific details of certain examples are set forth. Reference in the specification to “an example” or similar language means that a particular feature, structure, or characteristic described in connection with the example is included in at least that one example, but not necessarily in other examples.
In the IT virtualization world, resources can be summarized mainly as cores (computing power), memory (computing power), storage (storage) and the number of networks/ports (networking), and these are the resources that virtualization technologies such as Openstack™ are capable of managing.
For networking applications, the number of types of resources is not only larger but also not necessarily fixed, and it evolves. For example, an edge router may operate with cores, memory, disk and bandwidth, while a core router may operate with dedicated RAM, some dedicated cores, some shared cores, a physical port, and a connection to a virtual local area network (VLAN) instead of a virtual extensible local area network (VXLAN).
Known systems are hardcoded mainly to the IT resources, and Telcos do not have the flexibility to adapt those systems to their specifications. Adding new types of resources or trying to modify known systems such as Openstack™ to perform something not round-robin-based is difficult, as this would involve coding that affects the root of Openstack™ behavior (and is therefore extremely risky). Also, the scheduling of Openstack™ is virtual machine (VM) based, so it has almost no visibility of an application, groups of applications, or the customers/users that own those applications. Also, Openstack™ is very structured in regions, zones and hosts (a vertical approach), and horizontal grouping, or even worse, mixtures of vertical and horizontal grouping, are extremely limited.
In the examples depicted in
In the examples depicted in
In some examples, resource allocation reserves the resources for creating the complete virtualized network function. In some examples, physical resources are represented as artifact instance trees. In some examples, reservation is represented as an ALLOCATE relationship between physical and virtual resources as depicted in
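By way of illustration only, the following Python sketch shows one possible in-memory representation of artifact instances, an instance tree, and an ALLOCATE relationship between a physical resource and the virtual resource it backs. The class and attribute names are hypothetical assumptions for the sketch, not the representation used by the examples above.

from dataclasses import dataclass, field
from typing import Dict, List, Tuple

@dataclass
class Artifact:
    kind: str                                   # e.g. "SERVER", "CORE", "VM", "VCORE"
    attributes: Dict[str, str] = field(default_factory=dict)
    children: List["Artifact"] = field(default_factory=list)    # artifact instance tree
    relationships: List[Tuple[str, "Artifact"]] = field(default_factory=list)

    def allocate(self, virtual: "Artifact") -> None:
        # Reservation is modelled as an ALLOCATE relationship from this
        # physical artifact to the virtual artifact that it backs.
        self.relationships.append(("ALLOCATE", virtual))

server = Artifact("SERVER", {"GENERAL.Name": "server-01"})
core = Artifact("CORE", {"INFO.Amount": "16"})
server.children.append(core)

vm = Artifact("VM", {"GENERAL.Name": "vnf-component-1"})
vcore = Artifact("VCORE", {"INFO.Amount": "4"})
vm.children.append(vcore)

core.allocate(vcore)   # reserve physical cores for the virtual cores of the VM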
Certain examples described herein involve a system which, instead of looking for a fixed set of types and a fixed set of resource candidates, receives as an input one or more specific resource allocation (or ‘assignment’) rules, allowing each execution to search for different resources (based on the application needs) or in a different pool of resources.
System 200 comprises a controller (for example controller 102 of
At block 220, the one or more processors are configured to select second input data defining one or more resource allocation rules.
At block 230, the one or more processors are configured to receive third input data defining an application request.
At block 240, the one or more processors are configured to, on the basis of the first, second and third input data, allocate physical resources from the pool to virtual resources to implement the application request.
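By way of illustration only, a minimal Python sketch of this flow is given below, assuming the pool, the rules and the application request are plain Python structures and each rule is a callable that decides whether a physical resource can back a given virtual resource; all names are hypothetical.

def allocate(pool, rules, application_request):
    # pool: list of physical resource records; rules: list of callables of the
    # form rule(virtual, physical) -> bool; application_request: dict listing
    # the virtual resources the application needs.
    remaining = list(pool)
    allocation = {}
    for virtual in application_request["virtual_resources"]:
        for rule in rules:
            candidates = [phys for phys in remaining if rule(virtual, phys)]
            if candidates:
                chosen = candidates[0]
                allocation[virtual["name"]] = chosen
                remaining.remove(chosen)   # do not allocate the same resource twice
                break
        else:
            raise RuntimeError("no physical resource satisfies " + virtual["name"])
    return allocation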
In certain examples, the application request comprises a virtual network function instance identifier.
In certain examples, the one or more resource allocation rules define one or more rules for allocating physical resources to virtual resources.
In certain examples, the first input data defining a pool of available physical resources defines one or more of one or more server cores, one or more server memories, one or more server disks, one or more network names, one or more subnetwork names, and one or more Internet Protocol (IP) addresses.
In certain examples, the virtual resources comprise a virtual machine comprising one or more of virtual cores, virtual memory, virtual disks, and virtual ports.
In certain examples, the application request comprises at least one parameter associated with affinity of physical resources. In certain examples, the application request comprises at least one parameter associated with anti-affinity of physical resources.
In certain examples, the second input data defining one or more resource allocation rules is selected on the basis of application type or situation. In some such examples, the application type or situation comprises a normal deployment. In other such examples, the application type or situation comprises a disaster recovery.
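By way of illustration only, one way of selecting the second input data on the basis of application type or situation is sketched below; the situation labels and rule names are hypothetical.

RULES_BY_SITUATION = {
    "NORMAL_DEPLOYMENT": ["prefer_gold_servers", "any_server"],
    "DISASTER_RECOVERY": ["backup_site_servers_only"],
}

def select_rules(application_type_or_situation):
    # Unknown situations fall back to the normal-deployment rules.
    return RULES_BY_SITUATION.get(application_type_or_situation,
                                  RULES_BY_SITUATION["NORMAL_DEPLOYMENT"])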
Block 310 involves maintaining data defining a pool of available physical resources. The maintained data may for example be stored in a database.
Block 320 involves identifying data defining one or more resource allocation rules. The resource allocation rules may for example be stored in a database.
Block 330 involves receiving an application request.
Block 340 involves, on the basis of the maintained data, the identified data and the received application request, allocating physical resources from the pool to virtual resources to implement the application request.
In certain examples, the identified data is identified on the basis of application type or situation. In some such examples, the application type or situation comprises a normal deployment. In other such examples, the application type or situation comprises a disaster recovery.
In certain examples, the maintained data defining a pool of available physical resources defines one or more of one or more server cores, one or more server memories, one or more server disks, one or more network names, one or more subnetwork names, and one or more Internet Protocol (IP) addresses.
In certain examples, the virtual resources comprise one or more of virtual cores, virtual memory, virtual disks, and virtual ports.
According to certain examples, the resource allocation rule description supports grouping and targets, while execution combines the resources and rules (for example affinity restrictions) of the application with the resource allocation rules in order to properly match candidates in the selected resource pool.
Certain examples involve computation based on information stored in a database, such that adding a new type of resource or modifying an existing resource allocation rule can be carried out easily in order to modify behavior. According to an example, for a resource allocation rule that tries to match the virtual cores and hypervisor type of a VM against the physical cores and hypervisor type of a server, a parameter is added to the resource allocation rule to filter only servers marked as a certain type (for example servers of type GOLD).
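By way of illustration only, such a rule might be stored in the database as a simple record; adding the filter parameter below restricts candidates to GOLD servers without modifying any scheduler code. The field names are hypothetical.

rule = {
    "name": "vcore_on_matching_hypervisor",
    "virtual": {"type": "VCORE"},
    "physical": {"type": "CORE"},
    "match": ["hypervisor_type"],        # VM hypervisor type must equal the server's
    "filter": {"server_tag": "GOLD"},    # newly added parameter
}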
In certain examples, resource allocation is computed based on application needs, resource allocation rules and the resource pool, which allows examples to perform unique operations not present in other schedulers.
In some examples, each execution may differ. For example, GOLD servers may be looked for on a first execution, and if no GOLD servers are available, any type of server may be looked for on a second execution.
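By way of illustration only, such a two-pass execution could be expressed as follows, reusing an allocate() entry point such as the one sketched earlier; the rule sets are assumed to be supplied by the caller.

def allocate_with_fallback(pool, application_request, allocate, strict_rules, relaxed_rules):
    # First execution: strict rules (for example GOLD servers only).
    try:
        return allocate(pool, strict_rules, application_request)
    except RuntimeError:
        # Second execution: relaxed rules (any type of server).
        return allocate(pool, relaxed_rules, application_request)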
In some examples, restrictions are added, removed and/or modified on demand. For example, in a first year, certain applications cannot run on a kernel-based virtual machine (KVM), but with subsequent versions in a second year, certain applications can run on KVM and EX.
Some examples take into consideration other application parameters such as affinity (for example where all cores are on the same server) or anti-affinity (for example where certain VMs are not on the same server).
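By way of illustration only, affinity and anti-affinity could be checked against a computed allocation as follows; the allocation structure (each virtual resource name mapped to a record containing a "server" field) is an assumption of the sketch.

def satisfies_affinity(allocation, prefix):
    # Affinity: every virtual resource whose name starts with `prefix`
    # must have been placed on the same server.
    servers = {phys["server"] for name, phys in allocation.items()
               if name.startswith(prefix)}
    return len(servers) <= 1

def satisfies_anti_affinity(allocation, vm_a, vm_b):
    # Anti-affinity: the two VMs must not share a server.
    return allocation[vm_a]["server"] != allocation[vm_b]["server"]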
According to some examples, resources are grouped based on arbitrary concepts that can be mapped as different tags, for example GOLD server, NEW server, YOUR server or server BELONGS TO A CERTAIN ENTITY, etc.
In some examples, a decision as to which resource allocation rule (or rules) to use is based upon the application type or situation (for example normal deployment or disaster recovery).
In certain examples, a physical resource pool comprises a representation of the physical resources of the client infrastructure. In some examples, some of the resources are managed by a virtualized infrastructure manager (VIM), and in other examples, there are other physical resources such as networks or network elements which are not managed by the VIMs.
Once a VM has been allocated according to certain examples, this enables identification of a VIM where the VM can be created, a region, an availability zone and a host, and an Openstack™ network (or networks) where the VM can be connected.
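By way of illustration only, the outcome of allocating one VM might then be summarized in a record such as the following (all values hypothetical):

allocation_result = {
    "vim": "vim-eu-west",
    "region": "RegionOne",
    "availability_zone": "az-1",
    "host": "server-07",
    "networks": ["net-mgmt", "net-data"],
}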
In certain examples, each server is connected to a VIM through a hypervisor. In certain examples, credentials of the VIM are contained in an authentication artifact.
In certain examples, the number of cores, memories and disks of a server is indicated using an INFO.Amount attribute.
In certain examples, a network Openstack™ GENERAL.Name attribute indicates the name of a network a VM will be connected to. If the network does not exist in Openstack™, it will be created according to certain examples.
In certain examples, a subnetwork Openstack™ GENERAL.NAME attribute indicates the name of a subnetwork a VM will be connected to. If the subnetwork does not exist in Openstack™, it will be created according to certain examples. In some examples, a subnetwork specifies an IP address range within its attributes.
Certain examples involve trying to create the network if the STATUS is INSTANTIATED. In certain examples, the same applies to subnetworks. After a successful creation, the status is updated to ACTIVE according to certain examples. In some examples, for the next VM to be created that uses a given network, this network will not be created again.
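By way of illustration only, this status handling could look as follows, where vim_client is a hypothetical client for the underlying VIM:

def ensure_network(network, vim_client):
    # Create the network only while its STATUS is INSTANTIATED; once created,
    # mark it ACTIVE so the next VM that uses it does not trigger creation again.
    if network["STATUS"] == "INSTANTIATED":
        vim_client.create_network(network["GENERAL.Name"])
        network["STATUS"] = "ACTIVE"
    return network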
In certain examples, compute and networking artifacts store internal configuration that allows an appropriate JavaScript™ Object Notation (JSON) message to be composed depending on the type and version of the VIM: Openstack Icehouse™, Hewlett Packard Helion™, etc.
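By way of illustration only, the sketch below projects the same internal attributes onto a VIM-specific JSON body; only one hypothetical mapping is shown and the attribute names are assumptions.

import json

def compose_create_server(vim_type_and_version, compute_artifact):
    if vim_type_and_version.startswith("openstack"):
        body = {"server": {"name": compute_artifact["GENERAL.Name"],
                           "flavorRef": compute_artifact["flavor_id"],
                           "imageRef": compute_artifact["image_id"]}}
    else:
        # Other VIM types and versions would use a different message shape.
        body = {"name": compute_artifact["GENERAL.Name"]}
    return json.dumps(body)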
In certain examples, an attribute GENERAL.Name of artifacts AVAILABILITY_ZONE and SERVER allows selection of the server and availability zone in the VIM when creating the VM.
In some examples, an IPADDRESS artifact represents one single, fixed IP address. In other examples, an IPADDRESS artifact represents a number or range of dynamic host configuration protocol (DHCP) IP addresses.
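By way of illustration only, the two cases could be represented as follows (addresses taken from a documentation range; field names are hypothetical):

fixed_ip = {"type": "IPADDRESS", "mode": "FIXED", "address": "192.0.2.10"}
dhcp_pool = {"type": "IPADDRESS", "mode": "DHCP",
             "start": "192.0.2.100", "end": "192.0.2.199"}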
In the examples depicted in
In the examples depicted in
Certain system components and methods described herein may be implemented by way of machine readable instructions that are storable on a non-transitory storage medium.
Instruction 640 is configured to cause the processor 610 to maintain a database defining a pool of available physical resources.
Instruction 650 is configured to cause the processor 610 to receive an application request comprising a virtual network function instance identifier.
Instruction 660 is configured to cause the processor 610 to select, on the basis of application type or situation, a resource allocation rule from a plurality of resource allocation rules for allocating physical resources to virtual resources; and
Instruction 670 is configured to cause the processor 610 to, on the basis of content of the maintained database, the received application request and the selected resource allocation rule, implement the application request by allocating physical resources from the pool to virtual resources comprising at least one virtual machine.
The non-transitory storage medium can be any medium that can contain, store, or maintain programs and data for use by or in connection with an instruction execution system. Machine-readable media can comprise any one of many physical media such as, for example, electronic, magnetic, optical, electromagnetic, or semiconductor media. More specific examples of suitable machine-readable media include, but are not limited to, a hard drive, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory, or a portable disc.
The preceding description has been presented to illustrate and describe examples of the principles described. This description is not intended to be exhaustive or to limit these principles to any precise form disclosed. Many modifications and variations are possible in light of the above teaching.
This application is a continuation of U.S. patent application Ser. No. 16/071,849, filed on Jul. 20, 2018, which is a U.S. National Stage Filing under 35 U.S.C. 371 from International Application No. PCT/EP2016/051266, filed on Jan. 21, 2016, the entire disclosures of which are incorporated by reference herein.