The present invention relates to a method, an apparatus and a computer program product related to service function chain management. More particularly, the present invention relates to a method, an apparatus and a computer program product related to load and software configuration control of service function chains.
A means to reduce operator network TCO is network functions virtualization, i.e. using common Data Center (DC) hardware for different network functions (see the documents of the ETSI NFV (Network Function Virtualization) study group).
This applies to service function chaining, too. Service function chains typically observe, alter, or even terminate and re-establish session flows between user equipment and application platforms (Web, video, VoIP, etc.) by invoking, in a given order, a set of Service Functions. Service functions involved in a given service function chain (SFC, sometimes denoted as “chain” hereinafter) may include load balancing, firewalling, intrusion prevention, etc.
A given SFC-enabled domain may involve several instances of the same Service Function. Service function instances can be added to or removed from a given SFC. SFs can be co-located or embedded in distinct physical nodes, or virtualized.
Today, mobile network operators provide numerous value-adding packet processing functions in the so-called “Gi-LAN” (such as NAT, TCP optimizers, firewalls, video optimizers). These may be considered as SFCs, wherein the SFCs are implemented by chains of boxes that the traffic needs to be routed through. The inefficiency of those solutions and the complexity of managing them have resulted in standardization activities in 3GPP Rel. 13 (Flexible Mobile Service Steering, FMSS) and in the IETF (SFC, Service Function Chaining).
For example, for most network functions (security, NAT, load balancing, etc.), only a fraction of a flow requires processing by all functions, while the remaining part of the flow requires limited or other processing. By flexible flow routing, the amount of required resources can be decreased.
A next level of network flexibility is expected to be introduced with 5G. Research projects are being set up to introduce tailored network functions depending on the offered end-to-end service (e.g. 5G NORMA). The idea is to decompose current network functions of the RAN and the core and to select service-specific functions from both.
This increases the need to efficiently compose and connect network services from a high number of elementary functional modules. Note that the technologies used and introduced here are independent of any access technology.
In general, the aim is to implement those service functions (SFs) most effectively and efficiently in the cloud. Current approaches (see below) just virtualize the former physical “box-functions” to become VNFs (Virtual Network Functions) and use the connectivity framework provided by the data center (DC) to create the required data path.
But routing all user plane traffic through a DC is very challenging, see ETSI NFV INF005 documents:
“The dataplane throughput is a general concern for the NFV architecture [. . . ]. The throughput capacity of a virtualised network function deployment on an NFVI Node is a fundamental consideration as it is for network elements implementing physical network functions.”
Typical SFs in a SF chain are applications that involve some static and relatively small CPU code but have high requirements on throughput and/or on low latency. In terms of the network function virtualization infrastructure (NFVI), this means limited computing requirements but high networking requirements.
Currently this problem is addressed by accelerating the data path processing of connectivity/networking in the cloud environment—today typically carried out by virtual switches.
There are approaches in which certain OS functions (socket API and vNIC) are extended, or in which new, direct interfaces between the networking and the application function are introduced (e.g. using Intel's DPDK and/or SR-IOV to bypass the host and/or guest OS for fast packet transfer).
Although these measures can accelerate the networking performance between the virtual machines (VMs) that host the service functions, a number of problems still remain:
It may also be considered whether additional features intended to turn an “IT-DC” (a general-purpose DC such as a DC for Web servers) into a “Telco cloud DC” (a DC dedicated to telecommunication services) will increase the cost and destroy the economy of scale if complexity is added that is not needed for the typical workload of an IT-DC. Thus, new measures to improve the Telco cloud DC services might increase the complexity of virtualization.
In the data plane (bottom part), a router (shown by a box marked by “X”) routes traffic to one of plural SFCs (in
That is, by virtualized computing and networking, the rigid Gi-LAN can be replaced with Service Chaining, a software framework based on virtualized appliances (or simply VNFs) connected by virtualized networking, automated and managed by a service chain orchestrator using VNF and cloud management APIs.
The following embodiments are exemplary. Although the specification may refer to “an”, “one”, or “some” embodiment(s) in several locations, this does not necessarily mean that each such reference is to the same embodiment(s), or that the feature only applies to a single embodiment. Single features of different embodiments may also be combined to provide other embodiments. Furthermore, the words “comprising” and “including” should be understood as not limiting the described embodiments to consist of only those features that have been mentioned, and such embodiments may also contain features/structures that have not been specifically mentioned.
In particular, we use the concept of a virtual machine (VM) for describing the state-of-the-art computing environment for software in data centres. The VM relates to a software appliance (application SW and operating system) running on top of a hypervisor providing the virtualization of physical computing resources. The reference to VMs shall not exclude other virtualization environments such as so-called containers, where the software applications can also share parts of the operating system, or a combination of hypervisor-based and container-based virtualization.
The concepts we propose here as well as the prior art arrangement (
In summary, the term VM is used only as an example of how to execute a service function or a complete SFC.
According to a first embodiment of the invention, there is provided a method for adaptive service function chain configuration, wherein a multitude of virtual machines is executed on a given hardware and a single instance of a virtual machine executes a complete service function chain, said service function chain comprising a multitude of service functions. The method comprises continuously measuring the processing load of each virtual machine which is executing a specific service function chain. If the processing load of a certain virtual machine exceeds a predefined load level, an additional virtual machine that is able to execute the specific service function chain is activated to execute the specific service function chain. If the processing load of a certain virtual machine falls below a predefined load level, the work load of the specific service function chain of said virtual machine is transferred to another virtual machine that is able to execute the service function chain, and in consequence the specific service function chain of said virtual machine is deactivated.
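By way of non-limiting illustration only, the following sketch outlines such an adaptive control cycle. The names (VmState, ResourceManager), the threshold values and the helper methods are assumptions made for this example and do not represent a specific implementation of the embodiment.

```python
# Non-limiting sketch of the adaptive control cycle described above.
# VmState, ResourceManager, the thresholds and the helper methods are
# illustrative assumptions, not a prescribed implementation.
from dataclasses import dataclass, field

@dataclass
class VmState:
    vm_id: str
    sfc_id: str           # SFC currently executed by this VM
    load: float = 0.0     # continuously measured processing load (0.0 .. 1.0)
    active: bool = True

@dataclass
class ResourceManager:
    max_load: float = 0.8     # predefined upper load level
    min_load: float = 0.2     # predefined lower load level
    vms: list = field(default_factory=list)

    def evaluate(self):
        """One control cycle over the continuously measured VM loads."""
        for vm in [v for v in self.vms if v.active]:
            if vm.load > self.max_load:
                # activate an additional VM able to execute the same chain
                self.activate_additional_vm(vm.sfc_id)
            elif vm.load < self.min_load:
                target = self.find_takeover_vm(vm)
                if target is not None:
                    self.transfer_workload(vm, target)
                    vm.active = False   # deactivate the underloaded chain instance

    def find_takeover_vm(self, vm):
        """Pick another active VM able to execute the same chain with spare capacity."""
        candidates = [v for v in self.vms
                      if v is not vm and v.active
                      and v.sfc_id == vm.sfc_id and v.load < self.max_load]
        return min(candidates, key=lambda v: v.load, default=None)

    def activate_additional_vm(self, sfc_id):
        ...   # placeholder: configure a further VM holding the SW for sfc_id

    def transfer_workload(self, src, dst):
        ...   # placeholder: hand over remaining flows of src's chain instance to dst
```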
A variant of the above-described embodiment could be a method in which a virtual machine whose service function chain execution has been deactivated for any reason, e.g. due to extremely low work load, can be used to take over the functionality of another virtual machine and to execute service functions of another virtual machine which exceeds a predefined load level.
Another embodiment of the invention concerns load and software control among composite service function chains, where the software components needed to run a certain service chain functionality are loaded into all virtual machines. This enables the system to execute any possible subset of service chains on those virtual machines without prior loading of software components into the respective virtual machine.
Activation or deactivation of service function chains in virtual machines is controlled by a so-called resource manager. The resource manager is implemented by control and management entities, especially the Element Management System (EMS).
There may also be embodiments, where only a subset of software components is loaded into a virtual machine. The loaded subset of software components however must be sufficient to execute one or several service chain functionalities.
The functionality that will be activated in any instance of a virtual machine during runtime has to be defined by a so-called service function chain descriptor. The descriptor defines what service functions constitute the SFC and in what topology they are arranged.
Another important aspect is that the system is able to react flexibly to the permanently changing load situation. Therefore, means should be provided which enable the system to measure the processing load of any virtual machine. This can be achieved in different ways. One possibility is that the virtual machine which is executing a service function chain itself comprises means for measuring the processing load. Another possible solution is that the underlying infrastructure (HW or the operating system) provides means to measure the processing load of a virtual machine, and this information is transferred to an infrastructure manager and the service function chain resource manager on a regular basis.
In another embodiment, the processing load is defined by relevant parameters like processor load, memory space, free or unused memory space, and/or a combination of these plus any other parameters that are suitable for load measuring.
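Purely as an illustration of how such load parameters might be combined and reported on a regular basis, the following sketch is given; the weighting factors, the reporting interval and the measure/report callbacks are assumptions for the example, not part of the embodiments.

```python
# Illustrative sketch only: combining the load parameters mentioned above
# into a single figure and reporting it on a regular basis. The weights,
# the interval and the measure/report callbacks are assumptions.
import time

def composite_load(cpu_load, mem_used, mem_total, w_cpu=0.7, w_mem=0.3):
    """Combine processor load and memory utilisation into one load value."""
    return w_cpu * cpu_load + w_mem * (mem_used / mem_total)

def report_loop(measure, report, interval_s=5.0):
    """Periodically measure the VM load and push it to the resource manager."""
    while True:
        cpu, mem_used, mem_total = measure()   # provided by the VM or the infrastructure
        report(composite_load(cpu, mem_used, mem_total))
        time.sleep(interval_s)
```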
Of course, the different embodiments described above will run on a hardware apparatus which comprises at least one processor and at least one memory, part of the memory including computer program code, wherein the at least one memory and the computer program code are configured, with the at least one processor, to cause the apparatus to perform any of the method steps described above.
As a further embodiment, a computer program product residing on a non-transitory computer readable medium is possible, wherein the computer program comprises program code for running a multitude of virtual machines as service chains on a given hardware; program code for executing instances of a virtual machine which implement a complete service function chain, said service function chain comprising a multitude of service functions; program code for continuously measuring the processing load of each virtual machine which is executing a specific service function chain; program code controlling that, in case the processing load of a certain virtual machine exceeds a predefined load level, an additional virtual machine that is able to execute the specific service function chain is activated for executing the specific service function chain; and program code controlling that, in case the processing load of a certain virtual machine falls below a predefined load level, the work load of the specific service function chain of said virtual machine is transferred to another virtual machine that is able to execute the service function chain and in consequence the specific service function chain of said virtual machine is deactivated.
According to some embodiments of the invention, at least the following advantages are provided:
It is to be understood that any of the above modifications can be applied singly or in combination to the respective aspects to which they refer, unless they are explicitly stated as excluding alternatives.
Further details, features, objects, and advantages are apparent from the following detailed description of the preferred embodiments of the present invention which is to be taken in conjunction with the appended drawings, wherein
Herein below, certain embodiments of the present invention are described in detail with reference to the accompanying drawings, wherein the features of the embodiments can be freely combined with each other unless otherwise described. However, it is to be expressly understood that the description of certain embodiments is given by way of example only, and that it is in no way intended to be understood as limiting the invention to the disclosed details.
Moreover, it is to be understood that the apparatus is configured to perform the corresponding method, although in some cases only the apparatus or only the method are described.
From the considerations above, there is some doubt that the one-to-one mapping of physical service functions of a chain, or even of its decomposed components, to virtual SFs in VMs is an optimal migration of service function chaining into the cloud.
According to the architectures shown in
However, the state of the art in SW processing is much more advanced: usually, different applications from different vendors are running on the same OS. E.g., apps in mobile devices or dynamic link libraries are provided by different vendors and are combined in application programs. In the Windows OS, for example, the media stream processing of media players can use different components from different vendors, e.g. for source filters, de-multiplexing, decoding and rendering, which are combined during runtime.
According to some embodiments of the invention, each VM comprises SW of all the SFs making up a SFC which is intended to run on the VM. The VM may comprise SW of all the SFs for all SFCs that are intended to run on the VM. Plural VMs may be equipped with the same SW. For the deployment, a SW image may be used.
Advantages of this approach over the prior art are at least one of the following:
The “Chain OS middleware” is controlled by a control entity. The control entity may be a part of EMS 305, a part of VNF Manager 306, a separate entity, or integrated with another functionality. Its function may be split and distributed over plural entities, such as over EMS 305 and VNF Manager 306. In
OSS/BSS 307, Orchestrator 308, and VIM 309 are shown for completeness and may fulfill their respective function as usual.
The control entity (e.g. EMS 305) in charge of configuring the one or more “Chain VNF” VMs contains, among others, two data tables:
One table describes each chain (e.g. a graph template describing the topology relationships between the SFs), including a chain representation that is to be downloaded to the Chain VNF VM to instantiate a specific chain; the other table lists the SFs the chain(s) are built upon and contains the SF descriptors/templates.
The exact description of the SF chain that needs to be deployed on the VM is stored in the graph template. The graph template specifies how the SFs that compose the chain are connected. Such specification may be in the form of the list of composite SFs, but it is not limited to this kind of representation.
For managing the chain, the SF descriptors/templates may additionally include further descriptions of the SF, e.g. in terms of load/computation requirements, assigned SF weight, etc. Such additional information may be used during the VM instantiation and especially for VM reconfiguration from serving one chain to serving another chain. Depending on the number of users in the DC, different numbers of VMs may initially be instantiated for the individual SF chains with different requirements.
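A possible, purely illustrative shape of these two data tables, together with a simple aggregation over a graph template, is sketched below; all field names and values are assumptions for the example and are not prescribed by the embodiments.

```python
# Purely illustrative shape of the two data tables held by the control
# entity; all field names and values are assumptions for the example.

SF_DESCRIPTORS = {
    "nat":      {"cpu_req": 0.1, "mem_req_mb": 64,  "weight": 1},
    "firewall": {"cpu_req": 0.2, "mem_req_mb": 128, "weight": 2},
    "tcp_opt":  {"cpu_req": 0.3, "mem_req_mb": 256, "weight": 2},
}

# Graph templates: per chain, the SFs and their topology (here an ordered
# list; other representations of the graph are equally possible).
CHAIN_DESCRIPTORS = {
    "SFC1": {"graph": ["firewall", "nat"],            "traffic_profile": "day_peak"},
    "SFC2": {"graph": ["firewall", "tcp_opt", "nat"], "traffic_profile": "night_peak"},
}

def chain_requirements(chain_id):
    """Aggregate SF requirements of a chain, e.g. as input for VM planning."""
    graph = CHAIN_DESCRIPTORS[chain_id]["graph"]
    return {
        "cpu_req": sum(SF_DESCRIPTORS[sf]["cpu_req"] for sf in graph),
        "mem_req_mb": sum(SF_DESCRIPTORS[sf]["mem_req_mb"] for sf in graph),
    }
```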
The interface for the chain management functions inside the Chain VNF resides within the entity called “Chain OS middleware” 303. Depending on the implementation, the control entity (e.g. EMS 305 or VNF Manager 306), the Chain OS middleware, or a functional distribution among them may provide the following functions for managing the SF chains:
A SF may register itself with the control entity or the “Chain OS middleware” and provide input parameters about its supported interfaces. This could e.g. include support of packet header extensions according to some standard specification such as those of the IETF.
A SF graph may be configured that instantiates a particular SFC. The EMS or Chain OS may provide checks whether all SFs of the SFC can connect as required. The Chain OS may acknowledge the chain instantiation to external management entities such as the control entity, EMS or VNF Manager.
The Chain OS middleware may verify if all SFs needed for instantiation of a particular chain are already contained in the SW image of the VM. If this is not the case, it may inform the external management entities of missing SFs.
The Chain VNF (Chain OS middleware or other VNF internal management entity) may provide reports on the current SF Chain VM utilization to the EMS and the VNF Manager. Such information is a prerequisite for efficient dynamic scaling in/out of the VMs.
Some of the described functions may be part of the OS of the VM itself.
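The following sketch illustrates, by way of example only, how a Chain OS middleware interface offering the functions listed above might look; the class name, method names and return values are assumptions chosen for the illustration and do not define the actual interface.

```python
# Hypothetical sketch of a Chain OS middleware interface offering the
# management functions listed above; class name, method names and return
# values are assumptions chosen for the illustration.

class ChainOsMiddleware:
    def __init__(self, available_sfs):
        self.available_sfs = set(available_sfs)  # SFs contained in the VM SW image
        self.registered = {}                     # SF name -> supported interfaces
        self.instantiated_chains = {}

    def register_sf(self, sf_name, supported_interfaces):
        """A SF registers itself and announces its interfaces (e.g. header extensions)."""
        self.registered[sf_name] = supported_interfaces

    def instantiate_chain(self, chain_id, graph):
        """Configure a SF graph; report missing SFs instead of instantiating."""
        missing = [sf for sf in graph if sf not in self.available_sfs]
        if missing:
            return {"ok": False, "missing_sfs": missing}  # inform management entities
        self.instantiated_chains[chain_id] = graph
        return {"ok": True}                               # acknowledge the instantiation

    def utilization_report(self, current_load):
        """Report the current SF chain / VM utilization towards EMS / VNF Manager."""
        return {"chains": list(self.instantiated_chains), "load": current_load}
```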
For each Chain VNF deploying one particular graph (related to a SFC with a Chain ID) the same dedicated load balancer will be assigned. All Chain VNFs, after successfully instantiating a graph, will register to the load balancer corresponding to that particular graph/Chain ID.
There are two options for implementing a SFC:
A static approach in which the SW image already contains a runtime version of a SF chain. Everything (SFs and SFC(s)) is already configured. This option might have more optimization potential for computational efficiency; or
A dynamic instantiation as shown in
As a result of such a process, different numbers of VMs for different SF Chains may be instantiated depending on the load requirements, as shown in the example of
In
In the example of
The SF descriptors store the detailed information about each SF. This may include the detailed system requirements, priority, etc., and optionally additional parameters related to the exact SF implementation or to operators' preferences.
The upper box comprises SF chain descriptors defining the service chain. In particular, it comprises the graph template. As may be seen on the right side of
The lower box on both sides of
An example of an instantiation process is illustrated in
The Orchestrator 308 allocates a certain portion of all available resources for instantiating the VMs that will deploy different SFCs, e.g. 100 VMs will be allocated for such a purpose according to input templates based on some network planning. In
The VNF Manager 306 informs the EMS 305 about the available running VMs that are dedicated for the instantiation of the SFCs.
The EMS 305 may take into consideration the expected traffic load and the characteristics of the SFCs, e.g. expected traffic profiles for each SFC, in order to instantiate an appropriate number of different SFCs on each of the VMs and to calculate the optimal work load distribution among them. E.g. the EMS 305 may decide that the instantiation of chains on 100 activated VMs should follow the following schema: 40 VMs for SFC1, 30 VMs for SFC2, 10 VMs for SFC3, 10 VMs for SFC4, 10 VMs for SFC5. Thus, each of the VMs runs only one SFC. However, in other examples, some or all VMs may run plural SFCs.
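For illustration, a simple proportional planning rule reproducing the above example could look as sketched below; the rule itself and the function name are assumptions, and other planning algorithms may equally be used.

```python
# Illustration of a simple proportional planning rule reproducing the
# example above; the rule and the function name are assumptions, and
# other planning algorithms may equally be used.

def allocate_vms(total_vms, expected_load_share):
    """expected_load_share: chain id -> expected fraction of the total traffic."""
    alloc = {c: int(total_vms * share) for c, share in expected_load_share.items()}
    rest = total_vms - sum(alloc.values())     # hand out any remainder
    for c in sorted(expected_load_share, key=expected_load_share.get, reverse=True)[:rest]:
        alloc[c] += 1
    return alloc

print(allocate_vms(100, {"SFC1": 0.4, "SFC2": 0.3, "SFC3": 0.1,
                         "SFC4": 0.1, "SFC5": 0.1}))
# {'SFC1': 40, 'SFC2': 30, 'SFC3': 10, 'SFC4': 10, 'SFC5': 10}
```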
The control entity (e.g. EMS or VNFM; in the example shown in
When a SFC is instantiated on the VM it can register to the corresponding load balancer. Thus, the load balancer may distribute packets for the respective SFC to the corresponding VMs.
According to the above configuration, more efficient VM-internal communication may be used between the SFs of a SFC. This increases the efficiency of SFC execution in the data plane. The overall number of VMs can be reduced due to higher resource efficiency because not each SF needs an own VM but only each SFC. This also reduces the virtualization overhead, such as the number of guest OSs needed for the VNFs.
According to some embodiments of the invention, in DCs where SFCs with all their SFs are implemented on single VMs, the usage of the VMs may be adapted in a way such that the available resources are distributed among the SFCs in an optimal way: during runtime, VMs can be reconfigured such that in one or more VMs with a load level under a certain threshold another chain can be activated for which a higher network throughput is required (e.g. because the VMs currently allocated for that SFC have a higher load level). Correspondingly, if a chain is underutilized in a VM (the LB of the chain selects this VM less often than a certain threshold), the VM may be reconfigured such that it does not run this chain any more, thus increasing the capacity for other SFCs on the VM.
As criteria to determine whether a chain is underutilized in a VM, the load in the VM (in particular, if only the respective chain is running on that VM) may be used.
Alternatively, or in addition, the invocation frequency, i.e. how often the SFC on the VM is invoked, may be used. The invocation frequency may be measured at the VM or at the output side of the LB. If it is assumed that the LB distributes SFC invocations according to a specific rule, the invocation frequency of the SFC on a specific VM may also be determined from the total invocation frequency of the SFC (i.e. the demand for the SFC, which may be measured on the input side of the respective LB) by calculating the share of the specific VM under consideration. For example, if the rule of the LB is equal distribution on all VMs configured to run the SFC, the share is 1/(number of VMs configured to run the SFC).
These detection and reconfiguration functions may be performed by the control entity (e.g. the EMS for the SFC VNF, optionally in cooperation with the VNF Manager for the SFC-VNF, see below).
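A minimal sketch of this underutilization estimate, assuming that the LB distributes chain invocations equally over the configured VMs, is given below; the function names and the threshold parameter are illustrative assumptions only.

```python
# Minimal sketch of the underutilization estimate, assuming the LB
# distributes chain invocations equally over the configured VMs; the
# function names and the threshold parameter are illustrative.

def per_vm_invocation_rate(total_invocations_per_s, num_vms_for_chain):
    """Estimated invocation frequency of the SFC on one specific VM."""
    return total_invocations_per_s / num_vms_for_chain

def chain_underutilized(total_invocations_per_s, num_vms_for_chain,
                        lower_invocation_threshold):
    """True if the chain's share on a VM falls below the lower threshold."""
    return (per_vm_invocation_rate(total_invocations_per_s, num_vms_for_chain)
            < lower_invocation_threshold)
```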
This mechanism introduces an “inner loop” of control for the optimal use of the allocated resources/VMs for service function chaining in a given group of VMs for SF chaining. The term “inner loop” is used in contrast to “outer loop”, which denotes increasing or decreasing the overall number of VMs in the group for SF chaining, as according to the prior art. The operation of the “outer loop” is typically done by the orchestrator of the DC. The reconfiguration by the “outer loop” is a heavier operation, i.e. it requires more CPU capacity and bandwidth, than the reconfiguration by the “inner loop”. Due to the “inner loop”, according to some embodiments of the invention, the “outer loop” needs to be done less often (or, in some cases, not at all).
With this “inner loop”, there is no need to instantiate new VMs as long as load can be distributed among already established VMs. The process can be performed much more dynamically than scaling the number of VMs (“outer loop”).
Advantages of the “inner loop” are at least one of the following:
Easier and very dynamic scaling in and out per chain;
Resources may be reused (by removing unused or underused SF chains and deploying new SF chains on VMs already configured for service chaining);
Easier load balancing as load balancing may be performed per chain and need not be performed for individual SFs;
The number of VMs for SF chaining can be reduced as there is less need of resource over-provisioning.
Based on the information about SF chains and VM utilization provided by the Chain VNF, the control entity (e.g. EMS and/or VNF Manager and/or a separate entity) may adjust the VM and SF chain allocation in the following way. The description assumes, as an example, a typical function split between the EMS and the VNF Manager as control entity. However, this function split is not restrictive, and any other conceivable function split between EMS, VNFM, and a separate control entity may be adopted.
The EMS in charge of configuring the “Chain VNF” VMs contains, inter alia, a load table containing the load status of all VMs configured to run an instance of a SF chain. The EMS collects the load information directly from the SFC-VMs (option 1) or indirectly from the VNF Manager (option 2; the VNFM may collect such information directly from the SFC-VMs or from the VIM). Based on internal policies (load thresholds, priorities of SF chains, etc.), the EMS decides to reconfigure VMs. E.g., in order to meet the traffic demand, it may reconfigure a VM such that it is configured to run an instance of SFC-m instead of SFC-n. E.g., this may happen if there is higher demand for some particular SF chains at a given time while, on the other hand, some VMs running other chains are only lightly loaded. As other options, it may reconfigure the VM such that it does not run a certain SFC any more, or such that it is configured to run an instance of a certain SFC in addition to the instances of those SFCs it was already configured to run.
To provide a graceful migration, the EMS may inform the SFC VNF in advance about such an operation. The SFC VM may then inform its load balancer to not assign new flows to this VM (especially if stateful functions are performed). Additionally, before the final reconfiguration of the SFC-VM and the shutting down of the previously deployed SFC, the EMS may check in the load table of that SFC-VM whether the VM is no longer loaded (i.e. all ongoing flows have been processed). In addition, it may check if the deployment of the new SFC is safe.
After the EMS has reconfigured the VM by downloading a new graph template, the VM may register itself with the new load balancer responsible for the new SF chain. Alternatively, the EMS may inform the LB of the new SFC about the additional VM configured to run an instance of the new SFC.
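By way of example only, the graceful migration sequence described above might be sketched as follows; the entities, method names and the load-table access are assumptions made for the illustration and do not prescribe the actual interfaces of the EMS, the SFC-VM or the load balancer.

```python
# Example-only sketch of the graceful reconfiguration sequence described
# above; the entities, method names and the load-table access are
# assumptions and do not prescribe the actual EMS / SFC-VM / LB interfaces.

def reconfigure_vm(ems, vm, old_chain, new_chain, new_graph_template):
    # 1. Announce the reconfiguration so that no new flows are assigned to the VM.
    vm.notify_load_balancer(old_chain, accept_new_flows=False)

    # 2. Wait until all ongoing flows of the old chain have been processed.
    while ems.load_table[vm.vm_id][old_chain] > 0:
        ems.wait_for_next_report()

    # 3. Optionally check that deploying the new chain on this VM is safe.
    if not ems.deployment_safe(vm, new_chain):
        return False

    # 4. Download the new graph template and register with the new load balancer
    #    (alternatively, the EMS informs the LB about the additional VM).
    vm.deploy_graph(new_graph_template)
    vm.register_load_balancer(new_chain)
    return True
```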
In order to minimize the signaling required for regular updates of the load tables, alternatively (or in addition) a reduced reporting might be used. Instead of relying on regular reports on the load status of all VMs of a SF chain, the updates can be sent to the EMS only if some predefined thresholds are exceeded. Such thresholds may be predetermined or defined with respect to the used policies. For the efficient operation of the “inner control loop” two thresholds with respect to the SFC-VM load are of particular importance:
Max threshold: the upper limit for the SFC-VM load under which the operation and the load of the SFC-VM are considered to be normal. Once the load of the SFC goes above the “max threshold”, the EMS should trigger the reconfiguration of one or more other VMs and the deployment of the SFC running in the SFC-VM in the other VM(s).
Min threshold: the lower limit for the SFC-VM load under which the running of a separate VM for the SFC is not justified, i.e. the load of the SFC is so low that having a separate VM for such a load can be considered a waste of resources. If the load of the SFC goes below the “min threshold”, that VM is a candidate for reconfiguration and deployment of another SFC for which more resources are needed, e.g. an SFC that has reached the “max threshold”.
The EMS may determine the threshold values also depending on the number of VMs per SF chain. Such thresholds might be downloaded/configured in the SF Chain VNF by the EMS for reduced reporting. Therefore, the updates of the SFC-VM load tables on the EMS can be triggered once the SFC-VM load goes above the “max threshold” or below the “min threshold”. The EMS may take a time window or other history information into account. Based on internal policies (e.g. SFC priorities), the EMS can further take actions to address the current traffic load and reconfigure the SFC-VMs accordingly.
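A non-limiting sketch of such threshold-triggered (reduced) reporting is given below; the class structure, the state names and the send_update callback are assumptions for the example.

```python
# Non-limiting sketch of threshold-triggered (reduced) reporting: an update
# is sent only when the SFC-VM load crosses the configured min/max
# thresholds; state names and the send_update callback are assumptions.

class ReducedReporter:
    def __init__(self, min_threshold, max_threshold, send_update):
        self.min_t = min_threshold
        self.max_t = max_threshold
        self.send_update = send_update
        self._last_state = "normal"

    def on_load_sample(self, vm_id, load):
        if load > self.max_t:
            state = "above_max"   # candidate for scaling the chain out to other VMs
        elif load < self.min_t:
            state = "below_min"   # candidate for reconfiguration to another chain
        else:
            state = "normal"
        if state != self._last_state:      # report only on threshold crossings
            self.send_update(vm_id, load, state)
            self._last_state = state
```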
The number of VMs and the type of the SF chains that are deployed on them may be dynamically adjusted based on e.g. the VMs utilization reports.
The need to reconfigure one or more VMs may also be derived from considering the demand for a certain SFC. This demand may be derived from the load on the respective load balancer. By dividing the demand by the number of VMs configured to run the respective SFC, the load on each of the VMs caused by this particular SFC may be estimated.
The control entity (e.g. EMS and/or VNF Manager) may also learn the typical traffic profiles and proactively react by allocating the VMs with deployed SF chains according to the expected traffic characteristics. E.g., if during the night there is a higher demand for video optimizing SF chains, the EMS reconfigures already existing VMs such that a higher number of them runs that particular SF chain.
Regardless of the way the SFC is instantiated on the Chain VNF, i.e. statically or dynamically as described hereinabove, there is a potential for resource optimization and efficient planning of the initial deployment of the SFCs based on the information stored in the graph templates and SF descriptors/templates contained in the control entity (e.g. the EMS) and in the descriptors for the Orchestrator. In addition to the graph templates (providing the information about the required connectivity between SFs), the SFC description can provide information regarding the traffic profile (e.g. the time of day with peak load for that chain, etc.) or the scaling policy and priority in case of handling resource shortages for a particular chain. Furthermore, based on the number of users in the DC, the traffic profile and the system requirements of the individual SFs of the chain, the expected traffic load for each individual SFC may be estimated. All such additional information stored in the chain template can serve as a valuable input for the resource planning during the SFC instantiation.
According to some embodiments of the invention, the “outer control loop” may be still performed, in addition to the “inner control loop”:
By removing VMs if the overall load level of the “SFC-VMs” is below a particular threshold. This is done via interworking of the VNF Manager with the Orchestrator (removing resources) and the EMS (informing the EMS about the reduced number of VMs).
By adding new VMs. The EMS might deploy the SF chains for which there is higher demand. Due to the reconfiguration described above, all chain VMs are to some extent equally loaded and the load has reached a particular threshold. This is done via the standard scaling-out operation of the VNF Manager and interworking with the Orchestrator (allocation of resources) and the EMS (informing about new VMs for SF chaining).
The “outer loop” is responsible for the resource allocation on the higher level (deploying the overall number of VMs that can be distributed differently among required SFCs), whereas the fine grained resource allocation and adjustments are done in the “inner loop” (reuse of existing resource) with the aim to avoid unnecessary triggering of the “outer loop”. However, in order to optimize the resource utilization the operation of the “inner control loop” and “outer control loop” may be tightly correlated. In other words, the inner loop may trigger the outer loop once some predefined conditions are met.
As an example of such conditions, one may consider the following scenario. Once the EMS detects that X SFC-VMs have reached the “min threshold” and Y SFC-VMs have reached the “max threshold”, where X is much lower than Y, and that condition stays valid for more than a time Z, the outer loop should trigger the addition of new VMs. If instead X is much higher than Y, the “outer loop” should trigger the removal of VMs.
The conditions under which the “outer loop” is activated may depend on the available overall resources, the defined policies, and the desired level of orchestration actions. E.g., if releasing VMs is not critical, then the EMS can be instructed to trigger the scaling-down outer loop only if a very large number of VMs is free. Also, if the actions from the Orchestrator need to be minimized, the scaling out will be triggered only if some very critical threshold is reached; otherwise, the “inner loop” will try to reuse the resources as much as possible.
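By way of illustration only, such a trigger condition might be expressed as sketched below; the margin standing for “much lower/higher” and the hold time Z are example parameters, not prescribed values.

```python
# Illustration only: one way to express the trigger condition above. The
# margin standing for "much lower/higher" and the hold time Z are example
# parameters, not prescribed values.
import time

def outer_loop_action(vms_at_min, vms_at_max, condition_since,
                      now=None, margin=3, hold_time_s=300):
    """Return 'add_vms', 'remove_vms' or None from the min/max VM counts."""
    now = time.time() if now is None else now
    if now - condition_since < hold_time_s:          # condition must hold for Z time
        return None
    if vms_at_max >= margin * max(vms_at_min, 1):    # X much lower than Y
        return "add_vms"
    if vms_at_min >= margin * max(vms_at_max, 1):    # X much higher than Y
        return "remove_vms"
    return None
```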
As another option, the outer loop (e.g. the orchestrator) may observe how often the inner loop reconfigures VMs. If the reconfiguration frequency is relatively high, it may assume that the system is not stable because VM capacity is missing in the inner loop. In this case, according to some embodiments of the invention, the outer loop may increase the capacity for service chaining. E.g., the outer loop may add one or more VMs for service chaining or increase the capacity of one or more VMs assigned to service chaining.
In order to monitor the reconfiguration frequency, the outer loop may either monitor the VMs or it may monitor the reconfiguration commands issued by the control entity of the inner loop.
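A purely illustrative way for the outer loop to track the inner-loop reconfiguration frequency over a sliding time window is sketched below; the window length and the frequency threshold are assumptions for the example.

```python
# Purely illustrative way for the outer loop to track the inner-loop
# reconfiguration frequency over a sliding window; window length and
# frequency threshold are assumptions.
from collections import deque
import time

class ReconfigurationMonitor:
    def __init__(self, window_s=600.0, max_reconf_per_window=20):
        self.window_s = window_s
        self.max_reconf = max_reconf_per_window
        self.events = deque()

    def record_reconfiguration(self, now=None):
        """Called for each observed reconfiguration command or reconfigured VM."""
        self.events.append(time.time() if now is None else now)

    def capacity_increase_needed(self, now=None):
        """True if the reconfiguration frequency suggests missing VM capacity."""
        now = time.time() if now is None else now
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        return len(self.events) > self.max_reconf
```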
As a result of such a process, different numbers of VMs for different SF chains may be instantiated depending on the load requirements. The number of VMs and the type of the SF chains that are deployed on them may be dynamically adjusted based on the SFC-VM utilization reports.
The apparatus comprises demand detecting means 10 and reconfiguring means 20. The demand detecting means 10 and reconfiguring means 20 may be a demand detecting circuitry and reconfiguring circuitry, respectively.
The demand detecting means 10 detects if a demand for a service function chain exceeds a capacity of one or more virtual machines (S10). The capacity may be a capacity for the service function chain or a total capacity of the virtual machines. Each of the one or more virtual machines belongs to a group of virtual machines, wherein each of the virtual machines of the group is assigned to be configured to run a respective instance of the service function chain and respective instances of all service functions making up the service function chain; i.e., the VMs of the group are SFC-VMs.
If the demand exceeds the capacity (S10=“yes”), the reconfiguring means 20 reconfigures an additional virtual machine of the group to run a respective instance of the service function chain (S20). In addition, since each of the SFC-VMs is configured to run all SFs of a SFC, it reconfigures the VM to run respective instances of all the service functions making up the service function chain (S20). The additional virtual machine is different from each of the one or more virtual machines that were configured to run an instance of the SFC before the reconfiguration.
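For illustration only, steps S10 and S20 might be sketched as follows, assuming a simple per-VM capacity model; the data structures and the configure_chain call are hypothetical and serve only to make the steps concrete.

```python
# Hypothetical sketch of steps S10 and S20, assuming a simple per-VM
# capacity model; the data structures and the configure_chain call are
# assumptions made for the illustration.

def demand_exceeds_capacity(demand, configured_vms, per_vm_capacity):
    """S10: is the demand for the SFC above what the configured VMs can serve?"""
    return demand > len(configured_vms) * per_vm_capacity

def reconfigure_additional_vm(group_vms, configured_vms, sfc_id):
    """S20: configure a further VM of the group to run an instance of the SFC
    and instances of all SFs making up the SFC."""
    for vm in group_vms:
        if vm not in configured_vms:
            vm.configure_chain(sfc_id)   # instantiates the SFC and all of its SFs
            configured_vms.append(vm)
            return vm
    return None                          # no spare VM left in the group
```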
The apparatus comprises invoking frequency detecting means 110 and reconfiguring means 120. The invoking frequency detecting means 110 and reconfiguring means 120 may be an invoking frequency detecting circuitry and reconfiguring circuitry, respectively.
The invoking frequency detecting means 110 detects if an invoking frequency of invoking an instance of a service function chain in a virtual machine is below a lower invoking frequency threshold (S110). The virtual machine is a SFC-VM, i.e. the virtual machine is configured to run the instance of the service function chain and respective instances of all service functions making up the service function chain.
If the invoking frequency is below the lower invoking frequency threshold (S110=“yes”), the reconfiguring means 120 reconfigures the virtual machine such that it is not configured to run the instance of the service function chain any more (S120).
The apparatus comprises load detecting means 210 and reconfiguring means 220. The load detecting means 210 and reconfiguring means 220 may be a load detecting circuitry and reconfiguring circuitry, respectively.
The load detecting means detects if a load in a virtual machine is below a lower load threshold (S210). The virtual machine is a SFC-VM, i.e. the virtual machine is configured to run the instance of the service function chain and respective instances of all service functions making up the service function chain. In some embodiments, the VM may be configured to run only one SFC at a time.
If the load is below the lower load threshold (S210=“yes”), the reconfiguring means 220 reconfigures the virtual machine such that it is not configured to run the instance of the service function chain any more (S220).
The apparatus comprises reconfiguration monitoring means 310 and adding means 320. The reconfiguration monitoring means 310 and adding means 320 may be a reconfiguration monitoring circuitry and adding circuitry, respectively.
The reconfiguration monitoring means 310 monitors if one or more virtual machines are instructed to be reconfigured with a reconfiguration frequency higher than an upper reconfiguration frequency threshold (S310). Here, reconfiguring means a change of the configuration to run a set of service chains. I.e., reconfiguration includes at least one of configuring to run an instance of a first service function chain it has not been running when the reconfiguration instruction is received, and configuring not to run an instance of a second service function chain it has been running when the reconfiguration instruction is received. Each of the one or more virtual machines belongs to a group of virtual machines, wherein each of the virtual machines of the group is assigned to be configured to run a respective instance of group service function chains and respective instances of all service functions making up the respective service function chain. Each of the sets of service chains consists of one or more of the group service function chains.
The adding means 320 adds, if the reconfiguration frequency is higher than the upper reconfiguration threshold, virtual machine capacity to the group (e.g. by adding one or more VMs to the group and/or by increasing capacity of one or more VMs of the group) (S320).
Some embodiments of the invention may be employed in a 3GPP network. They may also be employed in other 3GPP and non-3GPP mobile and fixed networks such as CDMA, EDGE, LTE, LTE-A, UTRAN, WiFi, WLAN networks, PSTN, etc. That is, embodiments of the invention may be employed regardless of the access technology.
One piece of information may be transmitted in one or plural messages from one entity to another entity. Each of these messages may comprise further (different) pieces of information.
Names of network elements, protocols, and methods are based on current standards. In other versions or other technologies, the names of these network elements and/or protocols and/or methods may be different, as long as they provide a corresponding functionality.
If not otherwise stated or otherwise made clear from the context, the statement that two entities are different means that they perform different functions. It does not necessarily mean that they are based on different hardware. That is, each of the entities described in the present description may be based on a different hardware, or some or all of the entities may be based on the same hardware. It does not necessarily mean that they are based on different software. That is, each of the entities described in the present description may be based on different software, or some or all of the entities may be based on the same software.
According to the above description, it should thus be apparent that example embodiments of the present invention provide, for example, a control apparatus such as an EMS or a VNF Manager, or a component thereof, an apparatus embodying the same, a method for controlling and/or operating the same, and computer program(s) controlling and/or operating the same as well as mediums carrying such computer program(s) and forming computer program product(s).
Implementations of any of the above described blocks, apparatuses, systems, techniques, means, devices, or methods include, as non-limiting examples, implementations as hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, a virtual machine, or some combination thereof.
It is to be understood that what is described above is what is presently considered the preferred embodiments of the present invention. However, it should be noted that the description of the preferred embodiments is given by way of example only and that various modifications may be made without departing from the scope of the invention as defined by the appended claims.
It will be obvious to a person skilled in the art that, as the technology advances, the inventive concept can be implemented in various ways. The invention and its embodiments are not limited to the examples described above but may vary within the scope of the claims.
3GPP 3rd Generation Partnership Project
5G 5th Generation
API Application Program Interface
BSS Business Support System
CPU Central Processing Unit
Ctrl Control
DC Data Centre
DL downlink
DPDK Intel Data Plane Development Kit
EDGE Enhanced Data rates for GSM Evolution
EMS Element Management System
EPC Evolved Packet Core
ETSI European Telecommunications Standards Institute
GPRS General Packet Radio Service
GSM Global System for Mobile Communication
ID Identifier
IETF Internet Engineering Task Force
IP Internet Protocol
IT Information Technology
LB Load Balancer
LTE Long Term Evolution
LTE-A LTE Advanced
NAT Network Address Translation
NFV Network Function Virtualization
NFVI NFV Infrastructure
NORMA NOvel Radio Multiservice adaptive network Architecture
OS Operating System
OSS Operation Support System
PCRF Policy and Charging Rules Function
PSTN Public Switched Telephone Network
Rel Release
SDN Software-defined networking
SF Service Function
SFC Service Function Chaining
SFF Service Function Forwarder
SR-IOV Single Root I/O Virtualization
SW Software
TCO Total Cost of Ownership
TCP Transmission Control Protocol
TS Technical Specification
UE User Equipment, mobile device
UL uplink
UMTS Universal Mobile Telecommunications System
URL Uniform Resource Locator
VIM Virtual Infrastructure Manager
VM Virtual Machine
VNF Virtualized Network Function
vNIC Virtual Network Interface Card
VoIP Voice over IP
WiFi Wireless Fidelity
WLAN Wireless Local Area Network