Benefit is claimed under 35 U.S.C. 119(a)-(d) to Foreign Application Serial No. 202141020239 filed in India entitled “AUTOMATED REFERENCING AND RESOLUTION OF PROPERTIES ACROSS VIRTUAL NETWORK FUNCTIONS AND NETWORK SERVICE”, on May 3, 2021, by VMware, Inc., which is herein incorporated in its entirety by reference for all purposes.
An architectural framework, known as network functions virtualization (NFV), has been developed to enable the telecommunication industry to deliver network services with enhanced agility, rapid innovation, better economics and scale. The NFV platform dramatically simplifies delivery of network functions that support the network services by implementing virtual network functions (VNFs) that are delivered through software virtualization on standard hardware. As a result, network service providers have been able to quickly adapt to the on-demand, dynamic needs of telecommunications traffic and services.
A simplified block diagram of the NFV platform is illustrated in
NFVI 10 may be deployed in a multi-tenant cloud computing environment and
Using the NFV platform of
The first phase of VNF deployment is onboarding, which involves obtaining the VNF package from a vendor of the VNF and installing its contents in the NFV platform. The VNF package includes a VNF descriptor, which describes the properties of the VNF, a VNF manager, and an element management system. The VNF manager is software that is executed to deploy the VNF in NFVI 10 by issuing API calls to VIM 30. As such, a different VNF manager is developed for each different type of VIM 30 or for each different software version of the same type of VIM 30. Virtualized infrastructure management software developed, released, and branded under different names is referred to herein as different “types” of VIM 30. Some examples of different types of VIM 30 are VMware vCloud Director®, OpenStack®, and Kubernetes®. The element management system is software that is executed to manage the configuration of the VNF after the VNF has been instantiated.
The second phase of VNF deployment is instantiation. After the VNF package has been installed in the NFV platform, the VNF manager of that package is executed to instantiate the virtual machines that will function as VNFs according to the requirements specified in the VNF descriptor. More specifically, the VNF manager makes API calls to VIM 30 and VIM 30 communicates with virtualization manager 20 to instantiate and configure the virtual machines that are needed for the VNF. The API call that is made to configure the virtual machines includes a pointer to the element management system. The virtual machines communicate with the element management system to receive initial configuration parameters as well as configuration changes during the lifecycle of the VNF.
To meet the speed and latency goals of 5G networks, VNFs are being deployed as close to the end users as possible. As such, 5G networks employ a far greater number of radio towers, edge compute sites, and regional compute sites than prior generation networks. Scaling a platform that supports deployment of VNFs across hundreds of compute sites to one that supports deployment of VNFs across thousands, even tens of thousands, of sites has proven to be challenging and requires improvements to the NFV platform.
Accordingly, an improved network functions virtualization platform that is capable of supporting deployment of VNFs across a large number of sites, and that enables 5G compliant network services to be provisioned to end users, has been developed. The improved network functions virtualization platform employs a distributed orchestration framework by which virtual network functions may be deployed across a hybrid cloud infrastructure that includes cloud computing data centers of different types (i.e., cloud computing environments that employ either a different version of the same type of cloud computing management software or different types of cloud computing management software), under the control of a central orchestrator, so as to facilitate deployment of VNFs across thousands, even tens of thousands, of sites.
One or more embodiments provide a method of executing interfaces of a network service and virtual network functions of the network service within this improved network functions virtualization platform. The executed interfaces are defined in descriptors of the network service and the virtual network functions to enable cross-referencing of properties between them. The method, according to one embodiment, includes the steps of: retrieving a descriptor of a network service; determining from the descriptor of the network service, virtual network functions associated with the network service including first and second virtual network functions, and that the second virtual network function references an output of an interface of the network service; during execution of the interface of the network service, storing the output thereof that is referenced by the second virtual network function; and retrieving the output of the interface of the network service that has been stored as an input to an interface of the second virtual network function.
Further embodiments include a non-transitory computer-readable storage medium comprising instructions that cause a computer system to carry out the above methods, as well as a computer system configured to carry out the above methods.
VNFs are deployed across the data centers of the 5G network and even onto hardware mounted on radio towers 104. Examples of VNFs that are deployed include User Plane Function (UPF), Enhanced Packet Core (EPC), IP Multimedia Subsystem (IMS), firewall, domain name system (DNS), network address translation (NAT), network edge, and many others. To achieve the speed and latency goals of 5G networks, some of these VNFs, such as UPF, need to be located as close to the end users as possible.
Orchestration server 201 provides a main management interface for a network service provider and, as depicted, has two software modules running therein, a central orchestrator 210 and multi-VIM adapter 220.
Central orchestrator 210 receives network service requests and relies on several data stores, configured in non-volatile storage devices, to carry out its orchestration tasks. The first is network service (NS) catalog 211, which stores network service descriptors for all of the different network services that can be or have been provisioned by NFV platform 200. The second is VNF catalog 212, in which VNF descriptors of VNFs from various vendors are stored. A third data store illustrated in
Each VNF that needs to be deployed to support a network service goes through an onboarding phase. The onboarding phase involves obtaining a VNF package from a vendor of the VNF and installing its contents in NFV platform 200. The VNF package includes a VNF descriptor (VNFD), a VNF manager, and an element management system (EMS). The VNFD is a file that describes the properties of the VNF, including resources needed (e.g., amount and type of virtual compute, storage, and network resources), software metadata (e.g., software version of the VNF), connectivity descriptors for external connection points, internal virtual links and internal connection points, lifecycle management behavior (e.g., scaling and instantiation), supported lifecycle management operations and their operations, supported VNF-specific parameters, and affinity/anti-affinity rules. As described above, VNFDs are stored in VNF catalog 212. The VNF manager is proprietary software that the VNF vendor has developed for deploying the VNF onto conventional NFVI and is optionally provided in the embodiments so that it can be used to deploy the VNF onto conventional NFVI. The EMS is also proprietary software that the VNF vendor has developed to manage the configuration of a VNF after a virtual machine for the VNF has been instantiated. The virtual machine communicates with the EMS to receive initial configuration parameters as well as configuration changes during the lifecycle of the VNF.
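By way of illustration only, the contents of a VNFD of the kind just described might be represented in memory along the following lines; the structure and field names below are a hypothetical sketch, not an excerpt from any vendor's package or from an actual descriptor schema.

```python
# Hypothetical, simplified in-memory representation of a VNF descriptor (VNFD).
# Field names are illustrative only; an actual VNFD follows the vendor's schema.
example_vnfd = {
    "vnf_id": "VNF-01",
    "software_version": "2.1.0",
    "virtual_resources": {            # resources needed by each VM of the VNF
        "vcpu": 4,
        "memory_gb": 16,
        "storage_gb": 100,
        "network_speed_gbps": 10,
    },
    "extensions": ["SR-IOV"],         # custom placement constraints (e.g., SR-IOV NIC)
    "external_connection_points": ["cp-mgmt", "cp-data"],
    "internal_virtual_links": ["vl-internal-0"],
    "lifecycle": {                    # supported lifecycle management operations
        "instantiate": "scripts/instantiate.sh",
        "scale_out": "scripts/scale_out.sh",
    },
    "affinity_rules": {"anti_affinity": ["VNF-02"]},
}
```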
For each network service request that central orchestrator 210 receives, central orchestrator 210 searches NS catalog 211 for a network service descriptor corresponding to the request. In general, a network service descriptor contains identification information of all the VNFs that are used by the network service, network connectivity requirements for the VNFs, CPU utilization and other factors related to performance of each virtual machine on which a VNF is to be deployed, and specifications on when to heal the VNFs and when to scale the network service. Upon completing a successful search, central orchestrator 210 retrieves the network service descriptor from NS catalog 211 and extracts information it needs to carry out the request.
The information extracted from the network service descriptor includes identification information of all of the VNFs that are used by the network service. For all such VNFs, central orchestrator 210 retrieves into memory the corresponding VNF descriptors from VNF catalog 212 and parses them to extract information it needs to carry out the request. In particular, central orchestrator 210 generates commands for multi-VIM adapter 220 based on the extracted information and issues the commands to multi-VIM adapter 220. Multi-VIM adapter 220 then generates a set of generic commands to be issued to various, selected cloud computing data centers of the 5G network.
The commands generated by multi-VIM adapter 220 are generic in that they do not have to comply with any particular format required by the cloud computing management software running in the different cloud computing data centers. As such, the same set of commands may be sent to cloud computing data centers running different types of cloud computing management software and to cloud computing data centers running different versions of the same type of cloud computing management software. Because of this flexibility and the ubiquity of cloud computing data centers, network services that meet the performance requirements of 5G networks can potentially be rolled out without constructing new cloud computing data centers.
The 5G network depicted in
VIM 252a is virtualized infrastructure management software, executed in a physical or virtual server, that partitions the virtual compute, storage, and network resources provisioned by virtualization manager 256a for different tenants. VIM 252a also exposes the functionality for managing the virtual compute, storage, and network resources, e.g., as a set of APIs, to local control plane (LCP) 250a. LCP 250a is a physical or virtual appliance that receives the set of generic commands from multi-VIM adapter 220 and translates these commands into API calls that are recognizable by VIM 252a.
Regional DC 102 depicted in
VIM 252b is virtualized infrastructure management software, executed in a physical or virtual server, that partitions the virtual compute, storage, and network resources provisioned by virtualization manager 256b for different tenants. VIM 252b also exposes the functionality for managing the virtual compute, storage, and network resources, e.g., as a set of APIs, to LCP 250b. LCP 250b is a physical or virtual appliance that receives the set of generic commands from multi-VIM adapter 220 and translates these commands into API calls that are recognizable by VIM 252b.
Edge DC 103 depicted in
VIM 252c is virtualized infrastructure management software, executed in a physical or virtual server, that partitions the virtual compute, storage, and network resources provisioned by virtualization manager 256c for different tenants. VIM 252c also exposes the functionality for managing the virtual compute, storage, and network resources, e.g., as a set of APIs, to LCP 250c. LCP 250c is a physical or virtual appliance that receives the set of generic commands from multi-VIM adapter 220 and translates these commands into API calls that are recognizable by VIM 252c.
LCPs 250a of the core data centers, LCPs 250b of the regional data centers, and LCPs 250c of the edge data centers in combination with multi-VIM adapter 220 implement the functionality of multi-site virtual infrastructure orchestration of network services. As a result of decentralizing the virtual infrastructure orchestration of network services, VNFs can be deployed across thousands or even tens of thousands of these data centers and even onto hardware mounted on radio towers, so that they can be located as close to the end users as possible.
VIM 252c partitions the virtual compute, storage and network resources provisioned by virtualization manager 256c for different tenants. Inventory of virtual compute, storage and network resources for each of the tenants is maintained as cloud inventory 492 in a storage device of VIM 252c.
Cloud computing environment 470 is representative of a cloud computing environment for a particular tenant. In cloud computing environment 470, VMs 472 have been provisioned as virtual compute resources, virtual SAN 473 as a virtual storage resource, and virtual network 482 as a virtual network resource. Virtual network 482 is used to communicate between VMs 472 and is managed by at least one networking gateway component (e.g., gateway 484). Gateway 484 (e.g., executing as a virtual appliance) is configured to provide VMs 472 with connectivity to an external network (e.g., the Internet). Gateway 484 manages external public IP addresses for cloud computing environment 470 and one or more private internal networks interconnecting VMs 472. Gateway 484 is configured to route traffic incoming to and outgoing from cloud computing environment 470 and to provide networking services using VNFs for firewalls, network address translation (NAT), dynamic host configuration protocol (DHCP), and load balancing. Gateway 484 may be configured to provide virtual private network (VPN) connectivity over the external network with another VPN endpoint, such as orchestration server 201. Gateway 484 may be configured to provide Ethernet virtual private network (EVPN) connectivity over the external network so that it can communicate with multiple other data centers.
Local control planes of different cloud computing environments (e.g., LCP 250c of cloud computing environment 470) are configured to communicate with multi-VIM adapter 220 to enable multi-site virtual infrastructure orchestration of network services. LCP 250c (e.g., executing as a virtual appliance) may communicate with multi-VIM adapter 220 using Internet-based traffic via a VPN tunnel established between them, or alternatively, via a direct, dedicated link.
Central orchestrator 210 receives network service requests and, for each request received, searches NS catalog 211 for a network service descriptor corresponding to the request and VNF catalog 212 for VNF descriptors of the VNFs used by the requested network service. Upon completing a successful search, central orchestrator 210 retrieves the network service descriptor from NS catalog 211 and extracts the information it needs to carry out the request.
For a request for a new network service, the information extracted from the network service descriptor includes identification information of all of the VNFs that are used by the network service, network connectivity requirements for the VNFs, CPU utilization, and other factors related to performance of each virtual machine on which a VNF is to be deployed. Based on the extracted information, central orchestrator 210 issues a command to multi-VIM adapter 220 to create networks and subnets required by the new network service. In addition, for all the VNFs that are used by the network service, central orchestrator 210 parses the VNF descriptors to extract information relating to the VMs that need to be deployed to run the VNFs. Then, central orchestrator 210 issues commands to multi-VIM adapter 220 to create flavors for the VMs (i.e., to reserve resources for the VMs) and to create the VMs. Table 1 below provides examples of POST commands that are generated by central orchestrator 210 and issued to multi-VIM adapter 220. Multi-VIM adapter 220 translates these commands into a set of generic commands that it issues to LCPs of various, selected cloud computing data centers. These generic commands and parameters for these generic commands are shown in italics below each corresponding POST command.
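As a hedged illustration (and not a reproduction of Table 1), the following sketch suggests how multi-VIM adapter 220 might derive a set of generic commands from the information extracted from the descriptors; the command names, field names, and structure are hypothetical.

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class GenericCommand:
    """A VIM-agnostic command fanned out by multi-VIM adapter 220 to the LCPs."""
    action: str                                  # e.g. "create_network", "create_flavor", "create_vm"
    params: Dict[str, object] = field(default_factory=dict)


def build_generic_commands(extracted: dict) -> List[GenericCommand]:
    """Turn information extracted from the descriptors into generic commands.
    The input field names ("networks", "vnfs", etc.) are illustrative only."""
    commands = [GenericCommand("create_network", {"name": net["name"], "cidr": net["cidr"]})
                for net in extracted["networks"]]
    for vnf in extracted["vnfs"]:
        res = vnf["virtual_resources"]
        commands.append(GenericCommand("create_flavor",          # reserve resources for the VMs
                                       {"name": vnf["vnf_id"] + "-flavor",
                                        "vcpu": res["vcpu"], "memory_gb": res["memory_gb"]}))
        commands.append(GenericCommand("create_vm",               # create the VMs for the VNF
                                       {"vnf_id": vnf["vnf_id"],
                                        "flavor": vnf["vnf_id"] + "-flavor"}))
    return commands
```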
LCP 250, upon receiving the set of generic commands, translates each of the generic commands into a set of VIM-specific API calls. In particular, microservices running inside LCP 250 translate the generic commands into calls made to APIs that are exposed by the underlying VIM.
Upon receiving the API calls that it recognizes, VIM 252 then makes calls to APIs exposed by the underlying NFVI, e.g., APIs exposed by virtualization manager 256. For example, in response to NFVI-specific API calls for instantiating VNFs, virtual machines in which VNFs are implemented and virtual disks for the virtual machines are instantiated. Then, virtualization manager 256 updates DC inventory 491 with IDs of the instantiated virtual machines and virtual disks and also returns the IDs of the deployed virtual machines and virtual disks to VIM 252. VIM 252 in turn adds the IDs of the instantiated virtual machines and virtual disks into cloud inventory 492 and associates those IDs with the tenant for whom VIM 252 instantiated the virtual machines and virtual disks, and with the IDs of the VNFs for which they have been deployed.
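A minimal sketch of the translation layer described in the two preceding paragraphs, assuming a hypothetical translator class per VIM type or version; no actual OpenStack, VMware vCloud Director, or Kubernetes API names are implied.

```python
from typing import Dict, List


class VimTranslator:
    """One translator (microservice) per VIM type or version running inside an LCP."""
    def translate(self, action: str, params: Dict[str, object]) -> None:
        raise NotImplementedError


class HypotheticalOpenStackTranslator(VimTranslator):
    """Illustrative only; no real OpenStack API call names are implied."""
    def translate(self, action: str, params: Dict[str, object]) -> None:
        print(f"issuing VIM-specific calls for '{action}' with {params}")


class LocalControlPlane:
    """Receives generic commands from multi-VIM adapter 220 and delegates each
    one to the translator for the underlying VIM."""
    def __init__(self, translator: VimTranslator):
        self.translator = translator

    def handle(self, generic_commands: List[dict]) -> None:
        for command in generic_commands:
            self.translator.translate(command["action"], command["params"])


# Swapping in a new translator is all that changes when the underlying VIM is upgraded.
lcp = LocalControlPlane(HypotheticalOpenStackTranslator())
lcp.handle([{"action": "create_vm", "params": {"vnf_id": "VNF-01"}}])
```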
As discussed above, a major advantage provided by the embodiments over the prior art is scalability. Another advantage is in the handling of software upgrades, e.g., to virtual infrastructure management software. For example, in the prior art, if VIM 30 were upgraded, all of VNF managers 40, which issue VIM-specific API calls to VIM 30, would have to be modified to be compliant with the upgraded APIs of VIM 30. In contrast, embodiments can support a rolling type of upgrade, in which all VIMs 252 of a particular type do not need to be upgraded at the same time. Therefore, if VIM 252a were OpenStack version 1.14, and the VIMs 252 of one hundred other data centers were also OpenStack version 1.14, an upgrade to OpenStack version 1.15 could be carried out one VIM at a time, because an upgrade to the VIM of a particular data center requires only the corresponding LCP 250 of that data center to be modified. Upgrades to the VIMs of all the other data centers can be carried out at a later time, one VIM at a time.
The static inventory data, in the example of
The virtual resource requirements (CPU cores, memory, storage, and network speed) are extracted from the descriptors of the VNFs. The column “Extension” represents an extension attribute of the corresponding VNF descriptor, which may specify, for example, that an SR-IOV NIC is required or that high-IOPS storage is required. The extension attribute may be defined by the vendor of the VNF, by the network service customer, or generally by any entity that wants to specify custom placement constraints for the VNF.
At step 1014, central orchestrator 210 selects the next VNF to be processed through the loop, retrieves a descriptor of that VNF from VNF catalog 212, and extracts the requirements specified in the VNF descriptor. The requirements may specify the VNF type. If the VNF type is “edge,” the VNF is to be deployed in an edge data center. If the VNF type is “regional,” the VNF is to be deployed in a regional data center. If the VNF type is “core,” the VNF is to be deployed in a core data center. The requirements may also specify network connectivity requirements and minimum resource requirements.
At step 1016, central orchestrator 210 filters the data centers, which in a 5G network may number in the thousands or tens of thousands, based on two criteria. First, the filtering is done based on any location requirement for the network service to be deployed. The location requirement may have been specified, for example, in the network service request. So, if the location for the network service is a certain city, all data centers within zip codes that are not in that city will be filtered out. Second, the filtering is done based on the VNF type. If the VNF type is “edge,” regional and core data centers are filtered out. If the VNF type is “regional,” edge and core data centers are filtered out. If the VNF type is “core,” edge and regional data centers are filtered out.
At step 1018, central orchestrator 210 performs a further filtering based on static inventory and network connectivity requirements. A static inventory requirement may be for an SR-IOV NIC, a high IOPS storage, or a minimum memory or storage capacity. A network connectivity requirement may require a virtual network connection to another VNF specified in the network service descriptor. All data centers that cannot meet the static inventory requirement(s) and the network connectivity requirement(s) are filtered out.
At step 1020, central orchestrator 210 executes a matching algorithm based on the current usage levels of the virtual resources in the data centers that remain after the filtering of steps 1016 and 1018 and the resource requirements specified in the VNF descriptor. Any well-known algorithm for matching possible candidates (in this example, data centers) against requirements (in this example, VNF requirements) may be employed. If there are no matches (step 1022, No), central orchestrator 210 at step 1024 returns an error in response to the network service request. If there are matches (step 1022, Yes), central orchestrator 210 at step 1026 selects the best-matched data center and issues an intent to deploy the VNF to the best-matched data center.
When the intent to deploy the VNF is issued to the best-matched data center, the best-matched data center responds synchronously to that request by sending updates to its inventory data maintained by central orchestrator 210. Central orchestrator 210 updates the inventory data for the best-matched data center and confirms whether the best-matched data center is still able to meet the resource requirements of the VNF. If so (step 1028, Yes), central orchestrator 210 at step 1030 issues the command to deploy the VNF to the best-matched data center. If not (step 1028, No), central orchestrator 210 executes step 1020 again to find another match.
After step 1030, the process returns to step 1012, at which central orchestrator 210 selects the next VNF to be deployed, and the process described above after step 1012 is repeated for that VNF.
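The placement loop of steps 1012 through 1030 can be sketched as follows; this is an illustrative approximation under assumed data-center and descriptor field names, with a simple most-free-vCPUs heuristic standing in for whatever matching algorithm is employed at step 1020.

```python
def place_vnfs(network_service: dict, vnf_catalog: dict, data_centers: list) -> dict:
    """Hypothetical sketch of the per-VNF placement loop (steps 1012 through 1030).
    Data-center and descriptor field names are illustrative, not taken from the figures."""
    placements = {}
    for vnf_id in network_service["vnfs"]:                            # step 1012: next VNF
        d = vnf_catalog[vnf_id]                                       # step 1014: VNF descriptor
        # Step 1016: filter on the network service's location and the VNF type (edge/regional/core).
        candidates = [dc for dc in data_centers
                      if dc["city"] == network_service["city"] and dc["tier"] == d["vnf_type"]]
        # Step 1018: filter on static inventory (e.g., SR-IOV NIC) and connectivity requirements.
        candidates = [dc for dc in candidates
                      if set(d.get("extensions", [])) <= set(dc["capabilities"])]
        # Step 1020: match on current usage levels; here, most free vCPUs wins.
        candidates.sort(key=lambda dc: dc["free_vcpu"], reverse=True)
        for dc in candidates:
            # Steps 1026/1028: issue an intent to deploy, apply the synchronous inventory
            # update returned by the data center, and re-check the resource requirements.
            dc["free_vcpu"] -= d["vcpu"]
            if dc["free_vcpu"] >= 0:
                placements[vnf_id] = dc["id"]                         # step 1030: deploy the VNF
                break
            dc["free_vcpu"] += d["vcpu"]                              # requirements no longer met; retry
        else:
            raise RuntimeError(f"no matching data center for {vnf_id}")  # step 1024: return error
    return placements
```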
It should be recognized that, by having central orchestrator 210 maintain a state of the inventory of all the data centers locally, VNF placement decisions in connection with a network service deployment can be made immediately by comparing the requirements of the network service and of the VNFs required by the network service against the state of the inventory maintained by central orchestrator 210. Polling the data centers for their inventory state at the time the network service is requested may be practicable in prior-generation networks, in which there are only a few dozen data centers. However, with 5G networks, in which VNF placement decisions need to be made across thousands, tens of thousands, or more data centers, polling the data centers would result in considerable delays in the deployment of VNFs and ultimately in the deployment of the requested network service. Accordingly, embodiments provide an efficient technique for deploying VNFs to support network services deployed in 5G and other future generation networks.
In addition, as the VNFs are deployed across the data centers, central orchestrator 210 tracks where the VNFs have been deployed. Central orchestrator 210 employs a data structure to track such correspondence, hereinafter referred to as a tracking data structure, and stores such tracking data structure in a fourth data store.
In addition, table 1100 is a tracking data structure for just one network service that has been deployed. In actual implementations, central orchestrator 210 maintains a separate tracking data structure for each network service that has been deployed. Accordingly, for any network service that has been deployed, central orchestrator 210 has a holistic view of where (e.g., which data centers) the VNFs for that network service have been deployed and is able to specify workflows to be executed across all such VNFs.
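For illustration, such a tracking data structure could be as simple as a mapping from VNF ID to the data center hosting it; the field names and identifiers below are hypothetical.

```python
# Hypothetical shape of one tracking data structure (one per deployed network service);
# the field names and identifiers are illustrative only.
tracking = {
    "network_service_id": "NS-001",
    "vnf_placements": {
        "VNF-01": {"data_center_id": "edge-dc-103", "vm_ids": ["vm-7f2a"]},
        "VNF-02": {"data_center_id": "regional-dc-102", "vm_ids": ["vm-91c4"]},
    },
}
```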
In one embodiment, a workflow that is to be executed across VNFs of a network service is defined in a recipe, which is stored in NS catalog 211, together with or separately from the descriptor of the network service. A simplified example of one such recipe is illustrated in
In particular, the “steps” defined in the recipe illustrated in
The flow begins at step S1 where, in response to a request to execute a workflow in the VNFs, central orchestrator 210 retrieves recipe 1301 corresponding to the workflow and begins carrying out the actions specified in the “steps” of recipe 1301. For the license example given above, central orchestrator 210 obtains a license key from a license server. At step S1, central orchestrator 210 also extracts relevant “bindings” from recipe 1301 (e.g., for each VNF listed in the “bindings” section, the ID of the VNF and a selection of the method by which the workflow script is to be executed in the VM implementing the VNF) and passes them down to multi-VIM adapter 220 along with workflow data, e.g., the license key.
At step S2, multi-VIM adapter 220 issues a separate workflow execution command for each of the VNFs. Each such command is issued to the data center having the data center ID corresponding to the VNF and includes a pointer to the workflow script to be executed, the ID of the VNF, the selection of the method by which the workflow script is to be executed in the VM implementing the VNF, and the workflow data.
Upon receipt of the workflow execution command from multi-VIM adapter 220, LCP 250c passes it down to VIM 252c, which then executes step S3. In executing step S3, VIM 252c identifies the VM that implements the VNF having the VNF ID and passes down to virtualization manager 256c the VM ID, the pointer to the workflow script, the selection of the method by which the workflow script is to be executed in the VM implementing the VNF, and the workflow data.
At step S4, operations manager 1341 of virtualization manager 256c communicates with operations agent 1324 running in hypervisor 1320 of host 1362 to execute the workflow in the VM having the VM ID using the selected SSH or REST API method. At step S5, when the “SSH script” method is selected, operations agent 1324 retrieves the workflow script using the pointer to the workflow script, injects the workflow script into the VM through the special backdoor channel by which hypervisor 1320 is able to control the VM, and instructs the VM to execute the workflow script. On the other hand, when the “REST API” method is selected, operations agent 1324 makes a REST API call that specifies the pointer to the workflow script to the VM.
At step S6, the VM executes the workflow script and returns an indication of success or failure to operations agent 1324. In turn, operations agent 1324 at step S7 returns the indication of success or failure along with the VM ID to operations manager 1341, which forwards the message to VIM 252c. At step S8, VIM 252c looks up the VNF ID corresponding to the VM ID and sends the indication of success or failure along with the VNF ID to LCP 250c, which forwards the message to multi-VIM adapter 220, which then forwards the message to central orchestrator 210. At step S9, central orchestrator 210 updates its inventory to indicate success or failure of the execution of the workflow for the VNF corresponding to the VNF ID.
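A minimal sketch of this flow, collapsing the per-data-center hops of steps S3 through S7 into a single stand-in function; the recipe and tracking field names are assumptions carried over from the earlier illustrative sketches.

```python
from typing import Dict


def send_to_data_center(dc_id: str, command: Dict) -> bool:
    """Stand-in for the LCP/VIM/operations-agent chain (steps S3 through S7): the VM
    executes the workflow script via the selected "SSH script" or "REST API" method
    and an indication of success or failure propagates back up the chain."""
    print(f"dispatching to {dc_id}: {command['method']} execution of {command['script']} "
          f"for {command['vnf_id']}")
    return True   # assume success for the sketch


def run_workflow(recipe: Dict, tracking: Dict, workflow_data: Dict) -> Dict[str, bool]:
    """Hypothetical sketch of steps S1, S2, S8, and S9: fan the workflow out to every
    VNF listed in the recipe's bindings and record per-VNF success or failure."""
    results = {}
    for binding in recipe["bindings"]:                           # S1: VNF ID + execution method
        vnf_id = binding["vnf_id"]
        dc_id = tracking["vnf_placements"][vnf_id]["data_center_id"]
        command = {                                              # S2: one command per VNF
            "script": recipe["script_url"],
            "vnf_id": vnf_id,
            "method": binding["method"],                         # "SSH script" or "REST API"
            "data": workflow_data,                               # e.g., a license key
        }
        results[vnf_id] = send_to_data_center(dc_id, command)
    return results                                               # S8/S9: per-VNF success or failure
```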
Other examples of workflows that may be executed in the virtual machines implementing the VNFs include capacity operations (e.g., scale-out operations prompted by virtual machines that are consuming more than a threshold percentage of the CPU), healing operations performed on virtual machines implementing VNFs that have failed to respond to a health check, bootstrapping an SD-WAN VNF with information to connect to a management plane of the SD-WAN, applying patches to VNFs, backing up and restoring configuration settings of VNFs, running a test script in VNFs, and configuring VNFs for disaster recovery.
In addition, workflows that are executed in the virtual machines implementing the VNFs according to the same recipe may be carried out in parallel if there are no dependencies. In some situations, workflows may be run in two parts where the second part relies on results from the first part. In such situations, the responses from the virtual machines that have executed the first part are returned to central orchestrator 210 and then central orchestrator 210 issues additional commands through multi-VIM adapter 220 for one or more other virtual machines to execute the second part.
For example, when updating a license that has been granted for running a particular company's IMS VNF, central orchestrator 210 needs to know the MAC addresses of all UPF VNFs in radio towers that are connected to the IMS VNF. Accordingly, central orchestrator 210 executes the first part of the workflow to gather the MAC addresses of virtual machines that have implemented the UPF VNFs. Once all of the MAC addresses have been collected, central orchestrator 210 then pushes that information to the data center in which the virtual machine that implements the IMS VNF is running along with a workflow execution command to update the information about the granted license, in particular how many and which UPF VNFs are licensed.
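A hedged sketch of this two-part pattern, assuming a caller-supplied issue_command function that dispatches a named workflow to one VNF and returns its response; the command names and response fields are hypothetical.

```python
from typing import Callable, Dict, List


def update_ims_license(upf_vnf_ids: List[str], ims_vnf_id: str,
                       issue_command: Callable[[str, str, Dict], Dict]) -> Dict:
    """Hypothetical two-part workflow: part one gathers MAC addresses from the UPF
    VNFs, and only after all responses are returned to the central orchestrator is
    part two issued to the IMS VNF to update the granted-license information."""
    mac_addresses = [issue_command(vnf_id, "report-mac-address", {})["mac"]
                     for vnf_id in upf_vnf_ids]                       # part one
    return issue_command(ims_vnf_id, "update-license",                # part two
                         {"licensed_upf_macs": mac_addresses})
```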
Deployment of VNFs in the NFV platform described above involves configuring many VMs and executing numerous interfaces, which are workflows or scripts defined in a network service descriptor or VNF descriptors. Inputs for these interfaces involve runtime-generated values, such as license keys, IP addresses, and MAC addresses. In the embodiments, references and dependencies are defined in the network service descriptor and the VNF descriptors such that runtime-generated values are stored in a database so that they can be retrieved and populated back into the required inputs of subsequent interfaces in an automated manner. As a result, during deployment or upgrade of a network service, the entire configuration, including the VNFs that are part of the network service, can be deployed or upgraded without manual intervention.
Three VNFs are defined in network service descriptor 1500, VNF-01, VNF-02, and VNF-03, and three requirements are specified for VNF-02 in section 1520. The first requirement is a requirement to store the output of the pre-instantiate workflow, SWITCH DEV ID, as EXT SWITCH DEV ID, because EXT SWITCH DEV ID is defined as an external reference in the VNF descriptor for VNF-02. The second requirement is a requirement to store VDU-IP-ADDRESS, when it is generated upon execution of the interfaces of VNF-01, as VNF-01-VDU-IP-ADDRESS, because VNF-01-VDU-IP-ADDRESS is defined as an external reference in the VNF descriptor for VNF-02. In the embodiments illustrated herein, the runtime values of the external references are stored in workflow database 214 when they are generated upon execution of the corresponding interfaces, and later retrieved as required inputs of interfaces defined in the VNF descriptor of VNF-02.
The third requirement specified in section 1520 is a dependency requirement. The listing of VNF-01 as a dependency requirement of VNF-02 signifies that VNF-02 depends on VNF-01 and, as a consequence, interfaces of VNF-01 are required to be executed prior to interfaces of VNF-02. This ensures that the external reference to VNF-01-VDU-IP-ADDRESS defined in the VNF descriptor of VNF-02 will be able to be resolved when the interfaces of VNF-02 execute.
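For illustration, the requirements of section 1520 might be rendered as the following structure; the key names are hypothetical, while the referenced identifiers are those of the example above.

```python
# Hypothetical rendering of the requirements in section 1520 for VNF-02; the key
# names are illustrative, while the identifiers come from the example above.
vnf_02_requirements = {
    "store_outputs": [
        # Output of the network service's pre-instantiate workflow, stored under the
        # name that the VNF descriptor of VNF-02 declares as an external reference.
        {"interface": "pre-instantiate", "output": "SWITCH DEV ID",
         "store_as": "EXT SWITCH DEV ID"},
        # Runtime value generated by the interfaces of VNF-01, likewise stored for VNF-02.
        {"interface": "VNF-01 instantiate", "output": "VDU-IP-ADDRESS",
         "store_as": "VNF-01-VDU-IP-ADDRESS"},
    ],
    # Dependency: interfaces of VNF-01 must be executed before interfaces of VNF-02.
    "depends_on": ["VNF-01"],
}
```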
In some embodiments, there may be one or more other network services defined within a network service descriptor. In such cases, an output of an interface of one network service may be referenced as a required input for an interface of another network service.
If, at step 1912, central orchestrator 210 determines that there is no input to the interface that is defined as a reference, internal or external, steps 1914 and 1916 are skipped, and step 1918 is executed in the manner described above.
Central orchestrator 210 executes step 1920 when the interface generates an output during execution and returns the value of the output to central orchestrator 210. At step 1920, central orchestrator 210 stores the returned values in workflow database 214 for later retrieval by subsequent interfaces that require them as inputs.
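A minimal sketch of this resolve-then-store behavior, assuming a plain dictionary stands in for workflow database 214 and that each interface declares its inputs and the outputs it must store; the mapping of code lines to the numbered steps is approximate.

```python
from typing import Callable, Dict

workflow_database: Dict[str, object] = {}   # stand-in for workflow database 214


def execute_interface(interface: Dict, run: Callable[[Dict], Dict]) -> Dict:
    """Hypothetical sketch: inputs declared as references are resolved from the
    workflow database before execution, and outputs declared for storage are
    persisted after execution (step 1920) so that subsequent interfaces can
    retrieve them as required inputs."""
    inputs = {}
    for name, spec in interface["inputs"].items():
        if isinstance(spec, dict) and "reference" in spec:        # input defined as a reference
            inputs[name] = workflow_database[spec["reference"]]   # resolve stored runtime value
        else:
            inputs[name] = spec                                   # literal value; no resolution needed
    outputs = run(inputs)                                         # execute the interface itself
    for name, value in outputs.items():
        if name in interface.get("store_outputs", []):            # step 1920: persist referenced outputs
            workflow_database[name] = value
    return outputs
```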
Embodiments enable the following use cases: (1) instantiating a VNF with auto-generated license key; (2) termination of VNFs; and (3) references across VNF and NS.
In the first use case, the user adds a requirement to the descriptor, which defines the interface that is executed to generate the license key, to store the license key in a database. The user also defines a reference to the stored license key in the descriptor of the VNF that is being instantiated. During execution, the generated license key is stored in the database and automatically retrieved when the VNF is instantiated.
In the second use case, the user adds a pre-termination interface and a post-termination interface to the descriptor of the VNF that is being terminated, and includes a reference to an output of the pre-termination interface in the input field of the post-termination interface.
In the third use case, external references are made across the network service and VNFs.
The various embodiments described herein may be practiced with other computer system configurations including hand-held devices, microprocessor systems, microprocessor-based or programmable consumer electronics, minicomputers, mainframe computers, and the like.
One or more embodiments of the present invention may be implemented as one or more computer programs or as one or more computer program modules embodied in one or more computer readable media. The term computer readable medium refers to any data storage device that can store data which can thereafter be input to a computer system. Computer readable media may be based on any existing or subsequently developed technology for embodying computer programs in a manner that enables them to be read by a computer. Examples of a computer readable medium include a hard drive, NAS, read-only memory (ROM), RAM (e.g., flash memory device), Compact Disk (e.g., CD-ROM, CD-R, or CD-RW), Digital Versatile Disk (DVD), magnetic tape, and other optical and non-optical data storage devices. The computer readable medium can also be distributed over a network-coupled computer system so that the computer readable code is stored and executed in a distributed fashion.
Although one or more embodiments of the present invention have been described in some detail for clarity of understanding, it will be apparent that certain changes and modifications may be made within the scope of the claims. Accordingly, the described embodiments are to be considered as illustrative and not restrictive, and the scope of the claims is not to be limited to details given herein but may be modified within the scope and equivalents of the claims. In the claims, elements and/or steps do not imply any particular order of operation, unless explicitly stated in the claims.
Virtualization systems in accordance with the various embodiments may be implemented as hosted embodiments, as non-hosted embodiments, or as embodiments that tend to blur distinctions between the two; all are envisioned. Furthermore, various virtualization operations may be wholly or partially implemented in hardware. For example, a hardware implementation may employ a look-up table for modification of storage access requests to secure non-disk data.
Many variations, modifications, additions, and improvements are possible, regardless of the degree of virtualization. The virtualization software can therefore include components of a host, console, or guest operating system that perform virtualization functions. Plural instances may be provided for components, operations, or structures described herein as a single instance. Finally, boundaries between various components, operations, and data stores are somewhat arbitrary, and particular operations are illustrated in the context of specific illustrative configurations. Other allocations of functionality are envisioned and may fall within the scope of the invention. In general, structures and functionalities presented as separate components in exemplary configurations may be implemented as a combined structure or component. Similarly, structures and functionalities presented as a single component may be implemented as separate components. These and other variations, modifications, additions, and improvements may fall within the scope of the appended claims.