The present disclosure relates generally to management of network function virtualization in a communication network and, more particularly, to automation of network function management.
Network Function Virtualization (NFV) is a virtualization technology for communication networks that eschews proprietary hardware and implements network functions (NFs) in the communication network as software running on industry-standard servers supported by high-volume storage. A virtual network function (VNF) or containerized network function (CNF) implemented as a software component can be deployed at essentially any location within a cloud infrastructure without the need to install new equipment. NFV enables rapid deployment of new services, ease of scalability, and reduced redundancy in the NFs.
NFV can be used in conjunction with software-defined networks (SDNs) to automate deployment and life cycle management of VNFs/CNFs. When a new VNF/CNF instance is deployed, it needs to be configured according to the specifications of the network operator. To enable self-configuration of VNFs/CNFs following deployment, either the configuration or a reference to the configuration is injected into a VM or container for the VNF/CNF when the VNF/CNF is being created. The reference can comprise, for example, a uniform resource locator (URL) that the VNF/CNF can use to download the configuration from external data storage. Known techniques for injecting a configuration into a VNF/CNF include the use of deployment templates, configuration drives and Helm charts. Once deployed, the VNF/CNF may be upgraded from time to time during the life cycle of the VNF/CNF. For this purpose, the VNF/CNF may have a North-Bound Interface (NBI) that enables the network operator to change the configuration of a running VNF/CNF.
While complete end-to-end automation throughout the life cycle of the VNF/CNF is a goal, certain aspects of VNF/CNF management have proven difficult to fully achieve without some human intervention. In a cloud-based system, hardware and software may fail, or may need to be upgraded, resulting in service disruption. Following a service disruption, the VNF/CNF needs to be restored from backup or re-deployed when the VM for a VNF or a container for a CNF is restored to an active state. In cloud-based systems, mechanisms for rapid restoration and re-deployment become increasingly important because the VNF/CNF is running on one or two layers of cloud infrastructure that can fail or need to be upgraded, resulting in more frequent restoration or deployment of the VNF/CNF during its life cycle. After restoration or re-deployment, the VNF/CNF will need to be configured. If the configuration of the VNF/CNF has been updated, the network operator will typically want to use the most recent configuration. If the VNF/CNF is re-deployed using the same deployment templates/Helm charts used in the original deployment, any changes to the configuration made subsequent to the original deployment will be lost. While the network operator could change the deployment templates/Helm charts when the configuration of the VNF/CNF is changed, updating the deployment templates/Helm charts breaks the end-to-end automation. If the network operator has advance notice of the service disruption, the network operator could back up the configuration to external storage and use the saved configuration to restore the VNF/CNF to the most recent configuration. This approach requires the network operator to intervene to make the backup and/or to change the deployment templates/Helm charts to include a reference to the backup, thus breaking the end-to-end automation.
The present disclosure provides a mechanism for deploying and re-deploying VNFs and CNFs in a cloud-based communication network that enables automated configuration of VNFs/CNFs throughout the life cycle of the VNF/CNF. Elements of the solution comprise:
A first aspect of the disclosure comprises methods implemented by an interface controller of storing a configuration for a NF in a communication network. In one embodiment, the method comprises connecting to a data storage configured to store two or more instances of a configuration for the NF. The two or more instances include a base instance and one or more updated instances derived from the base instance. The method further comprises associating each of the two or more instances with a common configuration reference to create a configuration group for the NF. The method further comprises, for each instance in the configuration group, associating version control data with such instance that uniquely identifies a version of the configuration represented by such instance. The method further comprises receiving, from the NF, a download request message requesting the current configuration for the NF, the download request message including the common configuration reference. The method further comprises retrieving, from the data storage, an instance representing the current configuration for the NF selected from the configuration group based on the version control data, and sending, to the NF, the instance representing the current configuration for the NF.
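Purely by way of illustration, the following minimal Python sketch models the method of the first aspect with an in-memory data storage. The names ConfigInstance, ConfigStore, add_instance and download_current, as well as the example reference string, are assumptions made for this sketch and are not defined by the disclosure.

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional


@dataclass
class ConfigInstance:
    version: int           # version control data uniquely identifying this instance
    payload: bytes         # the configuration itself; the format is operator-defined
    is_current: bool = False


@dataclass
class ConfigStore:
    # Each configuration group collects all instances sharing a common configuration reference.
    groups: Dict[str, List[ConfigInstance]] = field(default_factory=dict)

    def add_instance(self, reference: str, payload: bytes) -> ConfigInstance:
        group = self.groups.setdefault(reference, [])
        instance = ConfigInstance(version=len(group) + 1, payload=payload)
        for other in group:            # the newest instance becomes the current configuration
            other.is_current = False
        instance.is_current = True
        group.append(instance)
        return instance

    def download_current(self, reference: str) -> Optional[ConfigInstance]:
        # Handle a download request: select, from the configuration group identified by the
        # common reference, the instance marked as current by the version control data.
        group = self.groups.get(reference, [])
        return next((i for i in group if i.is_current), None)


store = ConfigStore()
store.add_instance("urn:cfg:nf-x", b"base configuration")        # base instance
store.add_instance("urn:cfg:nf-x", b"updated configuration")     # updated instance derived from it
print(store.download_current("urn:cfg:nf-x").version)            # -> 2
```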
A second aspect of the disclosure comprises methods implemented by a NF in a communication network. In one embodiment, the method comprises receiving a configuration reference associated with a configuration group including two or more instances of a configuration for the NF stored in a data storage. The two or more instances include a base instance and one or more updated instances derived from the base instance. Each instance in the configuration group is associated with version control data that uniquely identifies a version of the configuration represented by such instance. The method further comprises sending, to an interface controller for the data storage, a download request message requesting a current configuration for the NF, the download request message including the configuration reference, and receiving, from the interface controller, an instance of the configuration representing a current configuration for the NF.
A third aspect of the disclosure comprises an interface controller configured to store a configuration for a NF in a communication network. The interface controller comprises a connecting unit configured to connect to a data storage configured to store two or more instances of a configuration for the NF. The two or more instances include a base instance and one or more updated instances derived from the base instance. The interface controller further comprises a referencing unit configured to associate each of the two or more instances with a common configuration reference to create a configuration group for the NF. The interface controller further comprises a versioning unit configured to, for each instance in the configuration group, associate version control data with such instance that uniquely identifies a version of the configuration represented by such instance. The interface controller further comprises a receiving unit configured to receive a download request message requesting the current configuration for the NF. The download request message includes the common configuration reference. The interface controller further comprises a retrieving unit configured to retrieve, from the data storage, an instance representing the current configuration for the NF selected from the configuration group based on the version control data. The interface controller further comprises a sending unit configured to send, to the NF, the instance representing the current configuration for the NF.
A fourth aspect of the disclosure comprises a NF in a communication network. The NF comprises a receiving unit configured to receive a configuration reference associated with a configuration group including two or more instances of a configuration for the NF stored in a data storage. The two or more instances include a base instance and one or more updated instances derived from the base instance. Each instance in the configuration group is associated with version control data that uniquely identifies a version of the configuration represented by such instance. The NF further comprises a sending unit configured to send, to an interface controller for the data storage, a download request message requesting a current configuration for the NF, the download request message including the configuration reference. The receiving unit is further configured to receive, from the interface controller, an instance of the configuration representing a current configuration for the NF.
A fifth aspect of the disclosure comprises an interface controller configured to store a configuration for a NF in a communication network. The interface controller comprises communication circuitry for communicating with the NF and processing circuitry. The processing circuitry is configured to connect to a data storage configured to store two or more instances of a configuration for the NF. The two or more instances include a base instance and one or more updated instances derived from the base instance. The processing circuitry is further configured to associate each of the two or more instances with a common configuration reference to create a configuration group for the NF. The processing circuitry is further configured to, for each instance in the configuration group, associate version control data with such instance that uniquely identifies a version of the configuration represented by such instance. The processing circuitry is further configured to receive a download request message requesting the current configuration for the NF. The download request message includes the common configuration reference. The processing circuitry is further configured to retrieve, from the data storage, an instance representing the current configuration for the NF selected from the configuration group based on the version control data. The processing circuitry is further configured to send, to the NF, the instance representing the current configuration for the NF.
A sixth aspect of the disclosure comprises a NF in a communication network. The NF comprises communication circuitry for communicating with an interface controller for a data storage and processing circuitry. The processing circuitry is configured to receive a configuration reference associated with a configuration group including two or more instances of a configuration for the NF stored in the data storage. The two or more instances include a base instance and one or more updated instances derived from the base instance. Each instance in the configuration group is associated with version control data that uniquely identifies a version of the configuration represented by such instance. The processing circuitry is further configured to send, to the interface controller for the data storage, a download request message requesting a current configuration for the NF, the download request message including the configuration reference, and to receive, from the interface controller, an instance of the configuration representing a current configuration for the NF.
A seventh aspect of the disclosure comprises a computer program for an interface controller configured to store a configuration for a NF in a communication network. The computer program comprises executable instructions that, when executed by processing circuitry in the interface controller, cause the interface controller to perform the method according to the first aspect.
An eighth aspect of the disclosure comprises a carrier containing a computer program according to the seventh aspect. The carrier is one of an electronic signal, optical signal, radio signal, or a non-transitory computer readable storage medium.
A ninth aspect of the disclosure comprises a computer program for a NF in a communication network. The computer program comprises executable instructions that, when executed by processing circuitry in the NF, cause the NF to perform the method according to the second aspect.
A tenth aspect of the disclosure comprises a carrier containing a computer program according to the ninth aspect. The carrier is one of an electronic signal, optical signal, radio signal, or a non-transitory computer readable storage medium.
A NFV Management and Orchestration (MANO) architecture provides a framework for the management and orchestration of the network resources including computing, networking, storage, virtual machine (VM) and container resources. NFV MANO includes three main components: a Virtualized Infrastructure Manager (VIM) 40, a NFV Orchestrator 45 and a Virtual NF Manager (VNFM) 50. The VIM 40 is responsible for controlling, managing, and monitoring the compute, storage, and network hardware provided by the datacenter. The NFV Orchestrator 45 is responsible for resource coordination including the creation, allocation and termination of VMs and containers that are used by the VNFs. The VNFM 50 is responsible for the life cycle management of VNFs and CNFs. VNFM 50 operations include instantiation of VNFs/CNFs 25, 35, scaling of VNFs/CNFs 25, 35, updating and/or upgrading VNFs/CNFs 25, 35 and termination of VNFs/CNFs 25, 35. A VNFM 50 can be assigned to either a single VNF/CNF 25, 35 instance or to multiple VNF/CNF 25, 35 instances. The VNFs/CNFs 25, 35 managed by a VNFM 50 can be all of the same type of NF or a mix of different types of NFs.
VNFMs 50 can be used in conjunction with software-defined networks (SDNs) to automate deployment and life cycle management of VNFs/CNFs 25, 35. Generic and vendor-specific VNFMs 50 coordinate with the IaaS and CaaS infrastructures and provide automated workflows for routine management functions, such as deploying and upgrading VNFs/CNFs 25, 35, that hide the complexity of the VNF/CNF. When a new VNF/CNF 25, 35 instance is deployed, it needs to be configured according to the specifications of the network operator. To enable self-configuration of VNFs/CNFs 25, 35, either the configuration or a reference to the configuration is injected into the VM or container when the VNF/CNF 25, 35 is being deployed. The reference can comprise, for example, a uniform resource locator (URL) or uniform resource identifier (URI) that the VNF/CNF 25, 35 can use to download the configuration from external data storage. Known techniques for injecting a configuration into a VNF/CNF 25, 35 include the use of deployment templates, configuration drives and Helm charts. Once deployed, the VNF/CNF 25, 35 may be upgraded from time to time during the life cycle of the VNF/CNF 25, 35. For this purpose, the VNF/CNF 25, 35 may have a North-Bound Interface (NBI) that enables the network operator to change the configuration of a running VNF/CNF 25, 35.
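As a purely illustrative sketch of this self-configuration step, the fragment below assumes the injected reference is surfaced inside the VM or container as an environment variable named CONFIG_REF; that variable name and the apply_configuration() helper are assumptions for the sketch, not elements of the disclosure.

```python
import os
import urllib.request


def apply_configuration(raw: bytes) -> None:
    # Placeholder for the NF's own self-configuration logic.
    print(f"applying {len(raw)} bytes of configuration")


def bootstrap() -> None:
    reference = os.environ["CONFIG_REF"]          # configuration reference injected at deployment
    with urllib.request.urlopen(reference) as response:
        apply_configuration(response.read())      # download the configuration and self-configure


if __name__ == "__main__":
    bootstrap()
```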
While complete end-to-end automation throughout the life cycle of the VNF/CNF 25, 35 is a goal, certain aspects of VNF/CNF 25, 35 management have proven difficult to fully achieve without some human intervention. In a cloud-based system, hardware and software may fail, or may need to be upgraded, resulting in service disruption. Following a service disruption, the VNF/CNF 25, 35 needs to be restored from backup or re-deployed when the VM or container is restored to an active state. In cloud-based systems, mechanisms for rapid restoration and re-deployment become increasingly important because the VNF/CNF 25, 35 is running on one or two layers of infrastructure that can fail or need to be upgraded, resulting in more frequent restoration or deployment of the VNF/CNF 25, 35 during its life cycle. After restoration or re-deployment of the VNF/CNF 25, 35, the VNF/CNF 25, 35 will need to be configured. If the configuration of the VNF/CNF 25, 35 has changed, the network operator will typically want to use the most recent configuration. If the VNF/CNF 25, 35 is re-deployed using the same deployment templates/Helm charts used in the original deployment, any changes to the configuration made subsequent to the original deployment will be lost. While the network operator could change the deployment templates/Helm charts when the configuration of the VNF/CNF 25, 35 is changed, updating the deployment templates/Helm charts breaks the end-to-end automation. When the network operator has advance notice of the service disruption, the network operator could back up the configuration to external storage and use the saved configuration to restore the VNF/CNF 25, 35 to the most recent configuration. This approach requires the network operator to intervene to make the backup and/or to change the deployment templates/Helm charts to include a reference to the backup, thus breaking the end-to-end automation.
One aspect of the present disclosure is to provide version controlled data storage 100 that enables automated configuration of VNFs/CNFs 25, 35 following deployment or re-deployment of VNF/CNF 25, 35 instances. Elements of the version controlled data storage 100 comprise:
Referring to
Version control logic 140 implements a version control scheme to enable automatic life cycle management. Metadata associated with the different versions of a NF configuration indicates which of two or more different versions is the current version. The collection of different versions of a NF configuration associated with the same configuration reference provides a complete history of the NF configuration, thus allowing rollback to any previous version of the NF configuration by simply modifying the metadata to mark that version as the current version.
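A minimal sketch of such a rollback, assuming each artifact's metadata is represented as a simple dictionary with a version number and a "current" flag, could look as follows; the layout is illustrative only.

```python
def rollback(artifacts: list, target_version: int) -> None:
    # Rolling back is purely a metadata change: the target version becomes current,
    # and the flag is cleared on every other (immutable) artifact.
    for artifact in artifacts:
        artifact["current"] = (artifact["version"] == target_version)


history = [
    {"version": 1, "current": False},
    {"version": 2, "current": True},
]
rollback(history, 1)      # the configuration reference now resolves to version 1 again
print(history)
```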
When a VNF/CNF 25, 35 is created, a configuration reference is injected into the VNF/CNF. Due to the version-controlled data storage 100, the configuration reference used during the initial day-0 deployment does not change over the lifetime of the VNF/CNF 25, 35, although the configuration of the VNF/CNF 25, 35 can change. Each version of a NF configuration for a VNF/CNF 25, 35 is stored as a separate immutable configuration artifact in the generic object storage 110 and is associated with the same configuration reference. Metadata associated with each configuration artifact allows the interface controller 120 to retrieve the configuration artifact representing the most recent, or current, version of the NF configuration if the VNF/CNF 25, 35 needs to be re-deployed or restored. Thus, the same configuration reference can be used throughout the life cycle of the VNF/CNF 25, 35 to retrieve the most current version of the NF configuration when a VNF/CNF 25, 35 is re-deployed following an infrastructure upgrade or service incident. In practice, the configuration reference can be, for example, a URL.
As an example, a VNF/CNF 25, 35 may have 10 configuration artifacts associated with its configuration reference, all tagged with its unique name “NF-X.” One of the ten configuration artifacts can be tagged as “current” in the metadata. When NF-X is re-deployed, it uses the configuration reference provided by the VNFM 50 during deployment to query the API 130 for the configuration data. In response to the query, the interface controller 120 retrieves the configuration artifact tagged as the “current” version of the NF configuration and returns it to the VNF/CNF 25, 35.
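A small illustrative sketch of this example is shown below: several artifacts share the same name tag, exactly one is tagged as current, and a download request returns that one. The dictionary layout and the handle_download_request() name are assumptions of the sketch.

```python
artifacts = [
    {"name": "NF-X", "version_id": f"version{n}", "current": False}
    for n in range(1, 10)
]
artifacts.append({"name": "NF-X", "version_id": "version10", "current": True})


def handle_download_request(configuration_group: list) -> dict:
    # The interface controller returns the configuration artifact tagged as current.
    return next(a for a in configuration_group if a["current"])


print(handle_download_request(artifacts)["version_id"])    # -> "version10"
```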
The following example first illustrates re-deployment of a VNF/CNF 25, 35 without version controlled data storage:

1. The network operator creates the initial NF configuration and stores it in external data storage. This configuration is downloadable via URL1.
2. The network operator deploys the VNF/CNF 25, 35 and injects URL1 into the VM or container at deployment time. The configuration reference (URL1) is injected via deployment templates (e.g., Heat orchestration templates, OVF files, config drive, etc.) in the case of a VNF 25, and via values.yaml/Helm charts or a similar method in the case of a CNF 35. The files used in deployment (or re-deployment) refer to URL1. The VNFM 50 or orchestrator could also be used to inject such data, but the problem remains the same: there is some file or descriptor that refers to URL1.
3. When the VNF/CNF 25, 35 initializes, it uses URL1 to download the configuration from the external data storage and self-configures accordingly.
4. After the initial deployment, the network operator changes the configuration of the VNF/CNF 25, 35 and the change is backed up to external storage. The latest configuration is now available via reference URL2.
5. A cloud outage results in service disruption of the VNF/CNF 25, 35, requiring re-deployment.
6. The network operator re-deploys the VNF/CNF 25, 35 using one of two methods. In a first method, the network operator manually changes the configuration reference in the deployment templates/charts from URL1 to URL2. This approach requires manual intervention and breaks the end-to-end automation. In a second method, the network operator re-deploys the VNF/CNF 25, 35 using the original deployment templates/charts that include URL1. This approach does not require manual intervention, but the network operator loses the configuration changes done in step 4.
Now, the same scenario is described using version controlled data storage 100 as herein described.
1. The network operator creates the initial configuration and stores it in the version controlled data storage 100. This configuration is downloadable via URL1.
2. The network operator deploys the VNF/CNF 25, 35 and injects URL1 into the VM or container at deployment time. The configuration reference (URL1) is injected via deployment templates (e.g., Heat orchestration templates, OVF files, config drive, etc.) in the case of a VNF 25, and via values.yaml/Helm charts or a similar method in the case of a CNF 35. The files used in deployment (or re-deployment) refer to URL1. The VNFM 50 or orchestrator could also be used to inject such data.
3. When the VNF/CNF 25, 35 initializes, it uses URL1 to download the configuration and self-configures accordingly.
4. After the initial deployment, the network operator changes the configuration of the VNF/CNF 25, 35 and the change is backed up to external storage. The new version is associated with URL1 and tagged as "current" by the version control logic 140. Because of version control, there can be multiple configuration artifacts associated with URL1, representing different versions of the NF configuration stored simultaneously in the data storage. In this example, there are two configuration artifacts available: the original (version1) created by the network operator during the initial deployment, and version2 containing the configuration changes made by the network operator. Each configuration artifact in version control is tagged with metadata (such as the name of the VNF/CNF 25, 35) and an attribute indicating whether it is currently active or not. In other words, the tag "current" defines which configuration artifact URL1 points to. In this case, because the NF configuration was changed and the VNF/CNF 25, 35 pushed the changed version into the version controlled data storage 100 (thus creating a new artifact, a new version of the configuration called version2) and tagged it as current, URL1 now points to version2. Unlike the previous example, the latest configuration is now available via URL1 (an illustrative sketch of this behaviour follows the list).
5. A cloud outage results in service disruption, requiring re-deployment of the VNF/CNF 25, 35.
6. The network operator re-deploys the VNF/CNF 25, 35 using the same deployment templates/charts used in the initial deployment. The deployment templates/charts refer to URL1. In this case, the version controlled data storage 100 ensures that URL1 now points to version2 rather than version1 (the old version), so there is no need for manual intervention and no loss of data.
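The following minimal sketch, which models the data storage as an in-memory dictionary, illustrates why re-deployment with the original reference now yields the changed configuration; the URL and function names are assumptions of the sketch.

```python
URL1 = "https://storage.example/nf-x/config"       # hypothetical configuration reference

storage = {URL1: []}                                # configuration group associated with URL1


def push(reference: str, payload: str) -> None:
    # Step 4: the changed configuration becomes a new artifact tagged as current.
    group = storage[reference]
    for artifact in group:
        artifact["current"] = False
    group.append({"version": len(group) + 1, "current": True, "payload": payload})


def download_current(reference: str) -> str:
    # Step 6: re-deployment still uses URL1, which now resolves to the current artifact.
    return next(a["payload"] for a in storage[reference] if a["current"])


push(URL1, "version1 - initial configuration")      # steps 1-3
push(URL1, "version2 - operator changes")           # step 4
print(download_current(URL1))                        # -> "version2 - operator changes"
```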
The version controlled data storage 100 as herein described also enables centralized management of configuration artifacts, which can lead to significant savings in operating expenses for the customer. For example, the network operator may have a large number of VNFs/CNFs 25, 35 that need to be changed in order to upgrade a service or provide a new service. Currently, the network operator needs to manually edit the configuration in all VNFs/CNFs 25, 35 in order to maintain consistency and uniformity, which can be time consuming and expensive. When using version controlled data storage 100 as herein described, this kind of upgrade can easily be managed in a centralized way. To maintain consistency across many VNFs/CNFs 25, 35, the same configuration reference can be injected into each of the VNFs/CNFs 25, 35 that share a common configuration. Instead of manually editing the configuration for each VNF/CNF 25, 35, the network operator can push a new configuration artifact to the version controlled data storage 100 and tag it as "current." VNFs/CNFs 25, 35 can poll the version controlled data storage 100 to determine whether the deployed version of the NF configuration is current. If not, the VNF/CNF 25, 35 can download the most recent configuration and configure itself accordingly. The network operator does not need to log into VNFs/CNFs 25, 35 individually, or worry about whether all VNFs/CNFs 25, 35 are updated. The network operator simply needs to push the new configuration into the version controlled data storage 100 and the configuration changes will be propagated to all VNFs/CNFs 25, 35 injected with the same configuration reference. The automated propagation of the changes can save as much as 90% of the typical cost of making configuration changes.
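A minimal sketch of such polling from the NF side is shown below. The status endpoint layout (GET <reference>/status?deployed=<version>) and the JSON field names deployed_is_current and current_version are assumptions made for the sketch and are not defined by the disclosure.

```python
import json
import time
import urllib.request

CONFIG_REF = "https://storage.example/nf-x/config"    # hypothetical injected reference
deployed_version = "version1"                          # version currently applied by this NF


def apply_configuration(raw: bytes) -> None:
    # Placeholder for the NF's own re-configuration logic.
    print(f"re-configuring with {len(raw)} bytes")


def poll_once() -> None:
    global deployed_version
    # Ask the interface controller whether the deployed version is still the current one.
    status_url = f"{CONFIG_REF}/status?deployed={deployed_version}"
    with urllib.request.urlopen(status_url) as response:
        status = json.load(response)
    if not status["deployed_is_current"]:
        # Download the instance tagged as current and re-configure without operator involvement.
        with urllib.request.urlopen(CONFIG_REF) as response:
            apply_configuration(response.read())
        deployed_version = status["current_version"]


if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(300)     # the polling interval is operator-defined; five minutes is arbitrary
```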
The techniques as herein described fully align initial deployment and re-deployment/restoration of the VNF/CNF 25, 35 as use cases/life cycle management operations. Thus, there will be fewer use cases to develop and maintain, and fewer use cases for the network operator/user to learn. No manual work or manual intervention is required to automatically re-deploy a VNF/CNF 25, 35 after, for example, an infrastructure upgrade or infrastructure-related incident.
The version control can be implemented as a separate stateless NF or as a microservice by using either tags, metadata or a combination of both. The metadata is associated with the configuration artifacts and stored in the version controlled data storage 100.
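Purely as an illustration of metadata that the version control logic 140 could associate with each configuration artifact, the sketch below combines the kinds of tags and attributes discussed elsewhere in this disclosure (NF name, unique version identifier, current flag, timestamp, service type and control attribute); the field names are assumptions of the sketch.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class ArtifactMetadata:
    nf_name: str                 # e.g. "NF-X", the NF the artifact belongs to
    version_id: str              # uniquely identifies this version within the configuration group
    current: bool                # True for exactly one artifact per configuration reference
    created_at: datetime         # timestamp, usable for ordering versions
    service_type: str = ""       # optional service type identifier
    control: str = ""            # optional control attribute for management operations

    # Note: the configuration artifact itself is immutable; only this metadata
    # (in particular the "current" flag) is modified for updates and rollbacks.


meta = ArtifactMetadata(
    nf_name="NF-X",
    version_id="version2",
    current=True,
    created_at=datetime.now(timezone.utc),
)
```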
Using a vendor-specific API and version-control logic as a "gateway" between VNFs/CNFs 25, 35 and the data storage (potentially offered by the infrastructure) is safer for the NF vendor because the vendor controls the configuration references and the API, and avoids costly changes to automation machinery introduced in third-party products.
NF configuration management is fully automatic in the life cycle management context, enabling the use of either vendor-specific or generic VNFMs 50.
Storage of configurations in a version-controlled environment as immutable artifacts enables easy centralized configuration rollback/roll-forward.
The API and version control logic enable centralized management of configuration artifacts used by multiple NFs in the network, enabling the network operator to add a configuration artifact to the version controlled data storage 100 to trigger automatic re-configuration of those VNFs/CNFs 25, 35 in the network which have been injected with a configuration reference associated with the changed configuration artifact.
The version controlled data storage 100 as herein described does not impose any requirements on the format of the stored data. It is fully up to the network operator to determine what data to store and the data format. The data can be encrypted or plain text, giving application developers for VNFs/CNFs full flexibility in how they use this function.
Some embodiments of the method 300 further comprise receiving, from the NF, one or more changes to the current configuration of the NF; storing, in the data storage, a new instance of the configuration based on the changes received from the NF; associating the new instance with the common configuration reference; and associating version control data with the new instance identifying the new instance as the current configuration of the NF.
Some embodiments of the method 300 further comprise modifying the version control data of an instance in the configuration group representing the previous current configuration deprecated by the new instance.
Some embodiments of the method 300 further comprise, following a restart of the NF, receiving a second download request message requesting the current configuration for the NF, the second download request message including the configuration reference; retrieving, from the data storage, the new instance of the configuration for the NF as the current configuration; and sending, to the NF, the new instance of the configuration for the NF.
Some embodiments of the method 300 further comprise receiving a status request message from the NF, the status request message including the configuration reference; responsive to the status request message, determining the current version of the configuration for the NF based on the version control information associated with the instances in the configuration group associated with the configuration reference; and sending, to the NF, a status response including an indication whether the currently deployed configuration for the NF is the current configuration.
In some embodiments of the method 300, the status request includes an identifier for the currently deployed version of the configuration and the status response includes a Boolean attribute set to a first value to indicate that the deployed version is the current version and set to a second value to indicate that the deployed version is not the current version.
In some embodiments of the method 300, the status response includes an identifier of the current version of the configuration in the group associated with the configuration reference.
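Purely by way of illustration, a minimal sketch of such status handling on the interface controller 120 side is shown below; the dictionary layout and the field names deployed_is_current and current_version are assumptions of the sketch.

```python
from typing import Dict, List


def handle_status_request(configuration_group: List[Dict], deployed_version: str) -> Dict:
    # Determine the current version from the version control information associated with
    # the instances in the configuration group, then report whether the deployed version
    # matches it.
    current = next(i for i in configuration_group if i["current"])
    return {
        "deployed_is_current": deployed_version == current["version_id"],   # Boolean attribute
        "current_version": current["version_id"],
    }


configuration_group = [
    {"version_id": "version1", "current": False},
    {"version_id": "version2", "current": True},
]
print(handle_status_request(configuration_group, "version1"))
# -> {'deployed_is_current': False, 'current_version': 'version2'}
```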
In some embodiments of the method 300, the version control information comprises a tag set to a first value to indicate that the version represented by the instance is the current version of the configuration and set to a second value to indicate that the version represented by the instance is not the current version of the configuration.
In some embodiments of the method 300, the version control information comprises an identifier that uniquely identifies a version of the configuration in a configuration group associated with the configuration reference.
In some embodiments of the method 300, the version control information comprises a timestamp.
In some embodiments of the method 300, the version control information comprises a service type identifier identifying a service associated with the instance.
In some embodiments of the method 300, the version control information comprises a control attribute for controlling management operations on the instance.
In some embodiments of the method 300, the configuration reference comprises a uniform resource identifier or uniform resource locator.
Some embodiments of the method 400 further comprise sending, to the interface controller 120, one or more changes to a deployed configuration of the NF to be stored in the data storage as a new instance representing a changed version of the configuration for the NF. The changed version is associated by the interface controller 120 with version control data identifying the changed version of the configuration as the current version.
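As a purely illustrative sketch from the NF side, the fragment below pushes a changed configuration back over the injected reference using an HTTP PUT; the use of PUT, the URL and the function name are assumptions of the sketch, the disclosure only requiring that the changes reach the interface controller 120, which stores them as a new current instance.

```python
import urllib.request

CONFIG_REF = "https://storage.example/nf-x/config"    # hypothetical injected reference


def push_changes(changed_config: bytes) -> None:
    request = urllib.request.Request(CONFIG_REF, data=changed_config, method="PUT")
    request.add_header("Content-Type", "application/octet-stream")
    with urllib.request.urlopen(request):
        pass    # the interface controller stores a new instance and tags it as current
```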
Some embodiments of the method 400 further comprise, following a restart of the NF, sending a second download request message requesting a current configuration for the NF, the second download request message including the configuration reference; and receiving, from the interface controller 120, the new instance of the configuration for the NF as the current configuration.
Some embodiments of the method 400 further comprise sending a status request message to the interface controller 120, the status request message including the configuration reference; and receiving, from the interface controller 120, a status response including an indication of whether the currently deployed configuration for the NF is the current configuration.
In some embodiments of the method 400, the status request includes an identifier for the currently deployed version of the configuration and the status response includes a Boolean attribute set to a first value to indicate that the deployed version is the current version and set to a second value to indicate that the deployed version is not the current version.
In some embodiments of the method 400, the status response includes an identifier of the current version of the configuration in the configuration group associated with the configuration reference.
In some embodiments of the method 400, the configuration reference comprises a uniform resource identifier or uniform resource locator.
The communication circuitry 520 comprises circuitry for communicating with other network devices over a communication network. The processing circuitry 530 controls the overall operation of the network node 500 and implements one or more of the procedures as herein described. The processing circuitry 530 may comprise one or more microprocessors, hardware, firmware, or a combination thereof. The processing circuitry 530 is configured to perform the methods as herein described.
Memory 540 comprises both volatile and non-volatile memory for storing computer program code and data needed by the processing circuitry 530 for operation. Memory 540 may comprise any tangible, non-transitory computer-readable storage medium for storing data including electronic, magnetic, optical, electromagnetic, or semiconductor data storage. Memory 540 stores a computer program 550 comprising executable instructions that configure the processing circuitry 530 to implement the procedures and methods as described herein. A computer program 550 in this regard may comprise one or more code modules corresponding to the means or units described above. In general, computer program instructions and configuration information are stored in a non-volatile memory, such as a ROM, erasable programmable read only memory (EPROM) or flash memory. Temporary data generated during operation may be stored in a volatile memory, such as a random access memory (RAM). In some embodiments, computer program 550 for configuring the processing circuitry 530 as herein described may be stored in a removable memory, such as a portable compact disc, portable digital video disc, or other removable media. The computer program 550 may also be embodied in a carrier such as an electronic signal, optical signal, radio signal, or computer readable storage medium.
Those skilled in the art will also appreciate that embodiments herein further include corresponding computer programs. A computer program comprises instructions which, when executed on at least one processor of an apparatus, cause the apparatus to carry out any of the respective processing described above. A computer program in this regard may comprise one or more code modules corresponding to the means or units described above.
Embodiments further include a carrier containing such a computer program. This carrier may comprise one of an electronic signal, optical signal, radio signal, or computer readable storage medium.
In this regard, embodiments herein also include a computer program product stored on a non-transitory computer readable (storage or recording) medium and comprising instructions that, when executed by a processor of an apparatus, cause the apparatus to perform as described above.
Embodiments further include a computer program product comprising program code portions for performing the steps of any of the embodiments herein when the computer program product is executed by a computing device. This computer program product may be stored on a computer readable recording medium.
Additional embodiments will now be described. At least some of these embodiments may be described as applicable in certain contexts and/or wireless network types for illustrative purposes, but the embodiments are similarly applicable in other contexts and/or wireless network types not explicitly described.