The present invention relates to a replace system and a replace method.
As an example of a technology relating to construction of each application such as a network function on an execution platform, in Patent Literature 1, there is described a technology for deconstructing an order of a product purchased by a customer into virtualized network function (VNF) units and deploying the VNF units on a network functions virtualization infrastructure (NFVI).
[Patent Literature 1] WO 2018/181826 A1
In such construction of each application as described in Patent Literature 1, when, for example, the population of the area covered by each application executed on an execution platform differs from one execution platform to another, the resource usage status varies depending on the execution platform, and there have been cases in which the resources of the execution platforms cannot be utilized effectively.
The present invention has been made in view of the above-mentioned circumstances, and has an object to provide a replace system and a replace method which are capable of effectively utilizing resources of an execution platform on which an application is constructed.
In order to solve the above-mentioned problem, according to one embodiment of the present invention, there is provided a replace system including: construction means for constructing each of a plurality of applications on any one of a plurality of execution platforms; actual result value identification means for identifying, for each of the plurality of the execution platforms, an actual result value of a resource usage status on the each of the plurality of the execution platforms; leveling index value identification means for identifying, for each of a plurality of replacement patterns, based on the actual result value, a leveling index value indicating, in a case in which at least one of the applications has been replaced onto another one of the execution platforms, at least one of a degree of leveling of a resource usage status on the another one of the execution platforms of a replacement destination or a degree of leveling of resource usage statuses among the plurality of the execution platforms; replacement pattern determination means for determining, based on the leveling index value identified for each of the plurality of the replacement patterns, a replacement pattern relating to replacement to be executed; and replacement means for replacing at least one of the applications based on the determined replacement pattern.
In one aspect of the present invention, the replace system further includes selection means for selecting an application to be replaced from among the plurality of the applications, the leveling index value identification means is configured to identify the leveling index value indicating, in a case in which the application to be replaced has been replaced onto each of the execution platforms different from one of the execution platforms on which the application is being executed, at least one of the degree of leveling of the resource usage status on the each of the execution platforms of the replacement destination or the degree of leveling of the resource usage statuses among the plurality of the execution platforms, the replacement pattern determination means is configured to determine the each of the execution platforms of the replacement destination for the application to be replaced based on the leveling index value in the case in which the application to be replaced has been replaced onto the each of the execution platforms different from the one of the execution platforms on which the application is being executed, and the replacement means is configured to replace the application to be replaced onto the determined each of the execution platforms.
Further, in one aspect of the present invention, the leveling index value identification means is configured to identify the leveling index value based on a degree of improvement in the leveling of the resource usage status on the each of the execution platforms of the replacement destination and a degree of improvement in the leveling of the resource usage status on the one of the execution platforms of a replacement source.
Further, in one aspect of the present invention, the actual result value identification means is configured to identify a total sum of actual result values of resource usage statuses in respective applications executed on the each of the execution platforms as the actual result value of the resource usage status on the each of the execution platforms.
Further, in one aspect of the present invention, the replace system further includes predicted value identification means for identifying, for an addition-scheduled execution platform on which an application is to be added among the plurality of the execution platforms, a predicted value of the resource usage status on the addition-scheduled execution platform in a case in which the application has been constructed on the addition-scheduled execution platform, and the leveling index value identification means is configured to identify the leveling index value based on the identified predicted value.
In this aspect, the predicted value identification means may be configured to identify the predicted value based on the actual result value of the resource usage status in a running application of the same type as the type of the application to be added.
Further, in one aspect of the present invention, the leveling index value identification means is configured to identify the leveling index value based on predicted values of the resource usage statuses for respective period types.
In this aspect, the leveling index value identification means may be configured to identify the leveling index value based on, in the case in which at least one of the applications has been replaced onto another one of the execution platforms, the predicted values of the resource usage statuses for the respective period types on the another one of the execution platforms of the replacement destination.
As another example, the leveling index value identification means may be configured to identify the leveling index value indicating a variation in the predicted values for the respective period types, and the replacement pattern determination means may be configured to determine the replacement pattern relating to the replacement to be executed based on smallness of the variation indicated by the leveling index value.
As another example, the leveling index value identification means may be configured to identify the leveling index value indicating a difference between a maximum value and a minimum value of the predicted values for the respective period types, and the replacement pattern determination means may be configured to determine the replacement pattern relating to the replacement to be executed based on smallness of the difference indicated by the leveling index value.
Further, in one aspect of the present invention, the leveling index value identification means is configured to identify the leveling index value indicating, in the case in which at least one of the applications has been replaced onto another one of the execution platforms, a variation in the predicted values of the resource usage statuses among the plurality of the execution platforms, and the replacement pattern determination means is configured to determine the replacement pattern relating to the replacement to be executed based on smallness of the variation indicated by the leveling index value.
Further, in one aspect of the present invention, the leveling index value identification means is configured to identify the leveling index value indicating, in the case in which at least one of the applications has been replaced onto another one of the execution platforms, a total sum of absolute values of differences between the predicted values of the resource usage statuses on the respective plurality of the execution platforms and a predetermined value, and the replacement pattern determination means is configured to determine, based on smallness of the total sum of the absolute values of the differences indicated by the leveling index value, the replacement pattern relating to the replacement to be executed.
Further, in one aspect of the present invention, each of the execution platforms is a Kubernetes cluster.
Further, in one aspect of the present invention, each of the applications is an application included in a communication system.
In this aspect, each of the applications may be a network function.
Further, in one aspect of the present invention, the resource usage status is at least one of a usage status of a CPU, a usage status of a memory, a usage status of a storage, a usage status of a network, or a usage status of electric power.
Further, according to one embodiment of the present invention, there is provided a replace method including the steps of: constructing each of a plurality of applications on any one of a plurality of execution platforms; identifying, for each of the plurality of the execution platforms, an actual result value of a resource usage status on the each of the plurality of the execution platforms; identifying, for each of a plurality of replacement patterns, based on the actual result value, a leveling index value indicating, in a case in which at least one of the applications has been replaced onto another one of the execution platforms, at least one of a degree of leveling of a resource usage status on the another one of the execution platforms of a replacement destination or a degree of leveling of resource usage statuses among the plurality of the execution platforms; determining, based on the leveling index value identified for each of the plurality of the replacement patterns, a replacement pattern relating to replacement to be executed; and replacing at least one of the applications based on the determined replacement pattern.
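By way of a purely illustrative sketch and not as a definition of the claimed method, the steps enumerated above may be pictured in Python roughly as follows; the function names, the data shapes, and the choice of variance across the execution platforms as the leveling index are all assumptions made only for illustration.

```python
# Minimal sketch of the replace method (illustrative only; names and data shapes are hypothetical).
# platforms: {platform_id: set of application ids constructed on that platform}
# metrics:   {(platform_id, application_id): actual result value of the resource usage status}
# patterns:  iterable of replacement patterns, each {application_id: destination platform_id}

def actual_result_values(platforms, metrics):
    """Identify, for each execution platform, an actual result value of its resource usage status."""
    return {p: sum(metrics.get((p, app), 0.0) for app in apps) for p, apps in platforms.items()}

def variation_across_platforms(usage_by_platform):
    """Variance of the usage values across platforms; a smaller value means better leveling."""
    values = list(usage_by_platform.values())
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

def choose_replacement_pattern(platforms, metrics, patterns):
    """Identify a leveling index value for each replacement pattern and pick the best one."""
    actual = actual_result_values(platforms, metrics)
    best_pattern, best_index = None, float("inf")
    for pattern in patterns:
        predicted = dict(actual)
        for app, dst in pattern.items():
            src = next(p for p, apps in platforms.items() if app in apps)
            load = metrics.get((src, app), 0.0)
            predicted[src] -= load        # usage removed from the replacement source
            predicted[dst] += load        # usage added to the replacement destination
        index = variation_across_platforms(predicted)
        if index < best_index:
            best_pattern, best_index = pattern, index
    return best_pattern                   # the replacement means would then replace accordingly
```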
One embodiment of the present invention is now described in detail with reference to the drawings.
As illustrated in
For example, several central data centers 10 are dispersedly arranged in an area (for example, in Japan) covered by the communication system 1.
For example, tens of regional data centers 12 are dispersedly arranged in the area covered by the communication system 1. For example, when the area covered by the communication system 1 is the entire area of Japan, one or two regional data centers 12 may be arranged in each prefecture.
For example, thousands of edge data centers 14 are dispersedly arranged in the area covered by the communication system 1. In addition, each of the edge data centers 14 can communicate to/from a communication facility 18 provided with an antenna 16. In this case, as illustrated in
A plurality of servers are arranged in each of the central data centers 10, the regional data centers 12, and the edge data centers 14 in this embodiment.
In this embodiment, for example, the central data centers 10, the regional data centers 12, and the edge data centers 14 can communicate to/from one another. Communication can also be performed between the central data centers 10, between the regional data centers 12, and between the edge data centers 14.
As illustrated in
The RAN 32 is a computer system, which is provided with the antenna 16, and corresponds to an eNodeB (eNB) in a fourth generation mobile communication system (hereinafter referred to as “4G”) and an NR base station (gNB) in a fifth generation mobile communication system (hereinafter referred to as “5G”). The RANs 32 in this embodiment are implemented mainly by server groups arranged in the edge data centers 14 and the communication facilities 18. A part of the RAN 32 (for example, virtual distributed unit (vDU) or virtual central unit (vCU) in 4G or distributed unit (DU) or central unit (CU) in 5G) may be implemented by the central data center 10 or the regional data center 12 instead of the edge data center 14.
The core network system 34 is a system corresponding to an evolved packet core (EPC) in 4G or a 5G core (5GC) in 5G. The core network systems 34 in this embodiment are implemented mainly by server groups arranged in the central data centers 10 or the regional data centers 12.
The platform system 30 in this embodiment is configured, for example, on a cloud platform and includes a processor 30a, a storage unit 30b, and a communication unit 30c, as illustrated in
In this embodiment, the platform system 30 is implemented by a server group arranged in the central data center 10. The platform system 30 may be implemented by a server group arranged in the regional data center 12.
In this embodiment, for example, in response to a purchase request for a network service (NS) by a purchaser, the network service for which the purchase request has been made is constructed in the RAN 32 or the core network system 34. Then, the constructed network service is provided to the purchaser.
For example, a network service, such as a voice communication service, a data communication service, or the like, is provided to the purchaser who is a mobile virtual network operator (MVNO). The voice communication service or the data communication service provided in this embodiment is eventually provided to a customer (end user) for the purchaser (MVNO in the above-mentioned example), who uses the UE 20 illustrated in
In addition, in this embodiment, an IoT service may be provided to an end user who uses a robot arm, a connected car, or the like. In this case, an end user who uses, for example, a robot arm, a connected car, or the like may be a purchaser of the network service in this embodiment.
In this embodiment, a container-type application execution environment such as Docker is installed in the servers arranged in the central data center 10, the regional data center 12, and the edge data center 14, and containers can be deployed in those servers and operated. In those servers, a cluster (Kubernetes cluster) managed by a container management tool such as Kubernetes may be constructed. Then, a processor on the constructed cluster may execute a container-type application.
The network service provided to the purchaser in this embodiment is formed of one or a plurality of functional units (for example, network function (NF)). In this embodiment, the functional unit is implemented by a containerized network function (CNF) being a container-based functional unit. The functional unit in this embodiment may also correspond to a network node.
In this embodiment, for example, the network service illustrated in
In this embodiment, it is also assumed that the plurality of RUs 40, the plurality of DUs 42, the plurality of CUs 44, and the plurality of UPFs 46, which are illustrated in
As illustrated in
The NS corresponds to, for example, a network service formed of a plurality of NFs. In this case, the NS may correspond to an element having a granularity, such as a 5GC, an EPC, a 5G RAN (gNB), or a 4G RAN (eNB).
In 5G, the NF corresponds to an element having a granularity, such as the DU 42, the CU 44, or the UPF 46. The NF also corresponds to an element having a granularity, such as an AMF or an SMF. In 4G, the NF corresponds to an element having a granularity, such as a mobility management entity (MME), a home subscriber server (HSS), a serving gateway (S-GW), a vDU, or a vCU. In this embodiment, for example, one NS includes one or a plurality of NFs. That is, one or a plurality of NFs are under the control of one NS.
The CNFC corresponds to an element having a granularity, such as DU mgmt or DU processing. The CNFC may be a microservice deployed on a server as one or more containers. For example, some CNFCs may be microservices that provide a part of the functions of the DU 42, the CU 44, and the like. Some CNFCs may be microservices that provide a part of the functions of the UPF 46, the AMF, the SMF, and the like. In this embodiment, for example, one NF includes one or a plurality of CNFCs. That is, one or a plurality of CNFCs are under the control of one NF.
The pod refers to, for example, the minimum unit for managing a Docker container by Kubernetes. In this embodiment, for example, one CNFC includes one or a plurality of pods. That is, one or a plurality of pods are under the control of one CNFC.
In this embodiment, for example, one pod includes one or a plurality of containers. That is, one or a plurality of containers are under the control of one pod.
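The containment relationships described above (one NS contains one or more NFs, one NF one or more CNFCs, one CNFC one or more pods, and one pod one or more containers) may be pictured, for illustration only, with the following hypothetical Python data structures.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Container:
    container_id: str

@dataclass
class Pod:                      # minimum unit for managing a Docker container by Kubernetes
    containers: List[Container] = field(default_factory=list)

@dataclass
class CNFC:                     # microservice deployed as one or more containers
    pods: List[Pod] = field(default_factory=list)

@dataclass
class NF:                       # e.g., the DU 42, the CU 44, or the UPF 46
    cnfcs: List[CNFC] = field(default_factory=list)

@dataclass
class NS:                       # e.g., a 5GC, an EPC, a gNB, or an eNB
    nfs: List[NF] = field(default_factory=list)
```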
In addition, as illustrated in
The NSIs can be said to be end-to-end virtual circuits that span a plurality of domains (for example, from the RAN 32 to the core network system 34). Each NSI may be a slice for high-speed and high-capacity communication (for example, eMBB), a slice for high-reliability and low-latency communication (for example, URLLC), or a slice for connecting a large quantity of terminals (for example, mMTC). The NSSIs can be said to be single-domain virtual circuits dividing an NSI. Each NSSI may be a slice of a RAN domain, a slice of a mobile backhaul (MBH) domain, or a slice of a core network domain.
In this embodiment, for example, one NSI includes one or a plurality of NSSIs. That is, one or a plurality of NSSIs are under the control of one NSI. In this embodiment, a plurality of NSIs may share the same NSSI.
In addition, as illustrated in
In addition, in this embodiment, for example, one NF can belong to one or a plurality of network slices. Specifically, for example, network slice selection assistance information (NSSAI) including one or a plurality of pieces of single network slice selection assistance information (S-NSSAI) can be set for one NF. In this case, the S-NSSAI is information associated with the network slice. The NF is not required to belong to the network slice.
As illustrated in
The above-mentioned functions may be implemented by executing, by the processor 30a, a program that is installed in the platform system 30, which is a computer, and that includes instructions corresponding to the above-mentioned functions. This program may be supplied to the platform system 30 via a computer-readable information storage medium, such as an optical disc, a magnetic disk, a magnetic tape, a magneto-optical disc, a flash memory, or the like, or via the Internet or the like. The above-mentioned functions may also be implemented by a circuit block, a memory, and other LSIs. Further, a person skilled in the art would understand that the above-mentioned functions can be implemented in various forms by only hardware, by only software, or by a combination of hardware and software.
In this embodiment, for example, the container management module 64 executes life cycle management of a container including the construction of the container, such as the deployment and setting of the container.
In this case, the platform system 30 in this embodiment may include a plurality of container management modules 64. In each of the plurality of container management modules 64, a container management tool such as Kubernetes, and a package manager such as Helm may be installed. Each of the plurality of container management modules 64 may execute the construction of a container such as the deployment of the container for a server group (Kubernetes cluster) associated with the container management module 64.
The container management module 64 is not required to be included in the platform system 30. The container management module 64 may be provided in, for example, a server (that is, the RAN 32 or the core network system 34) managed by the container management module 64, or a server that is annexed to the server managed by the container management module 64.
In this embodiment, the repository module 66 stores, for example, a container image of a container included in a functional unit group (for example, NF group) that implements a network service.
In this embodiment, the inventory database 70 is, for example, a database in which inventory information for a plurality of servers managed by the platform system 30 and arranged in the RAN 32 and the core network system 34 is stored.
In this embodiment, for example, the inventory database 70 stores inventory data including physical inventory data and logical inventory data. The inventory data indicates the current statuses of the configuration of an element group included in the communication system 1 and the link between the elements. In addition, the inventory data indicates the status of resources managed by the platform system 30 (for example, resource usage status).
The server ID included in the physical inventory data is, for example, an identifier of the server associated with the physical inventory data.
The location data included in the physical inventory data is, for example, data indicating the location of the server (for example, the address of the location) associated with the physical inventory data.
The building data included in the physical inventory data is, for example, data indicating a building (for example, a building name) in which the server associated with the physical inventory data is arranged.
The floor number data included in the physical inventory data is, for example, data indicating a floor number at which the server associated with the physical inventory data is arranged.
The rack data included in the physical inventory data is, for example, an identifier of a rack in which the server associated with the physical inventory data is arranged.
The specification data included in the physical inventory data is, for example, data indicating the specifications of the server, such as the number of cores, the memory capacity, and the hard disk capacity, of the server, associated with the physical inventory data.
The network data included in the physical inventory data is, for example, data indicating an NIC included in the server associated with the physical inventory data, the number of ports included in the NIC, and a port ID of each port, and the like.
The operating container ID list included in the physical inventory data is, for example, data indicating a list of identifiers (container IDs) of instances of one or a plurality of containers operating on the server associated with the physical inventory data.
The cluster ID included in the physical inventory data is, for example, an identifier of a cluster (for example, Kubernetes cluster) to which the server associated with the physical inventory data belongs.
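As a purely illustrative representation of the physical inventory data items listed above (the class and field names are hypothetical and not part of the embodiment):

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class PhysicalInventoryData:
    server_id: str                     # identifier of the server
    location: str                      # e.g., the address of the location
    building: str                      # e.g., a building name
    floor_number: int
    rack_id: str
    specification: Dict[str, int]      # e.g., {"cores": 32, "memory_gb": 256, "hdd_gb": 4000}
    network: Dict[str, object]         # e.g., NICs, number of ports per NIC, port IDs
    operating_container_ids: List[str] = field(default_factory=list)
    cluster_id: str = ""               # Kubernetes cluster to which the server belongs
```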
In addition, the logical inventory data includes topology data for a plurality of elements included in the communication system 1, which indicates the current status of such link between the elements as illustrated in
The inventory data may also indicate the current status of, for example, a geographical relationship or a topological relationship between the elements included in the communication system 1. The above-mentioned inventory data includes location data indicating locations at which the elements included in the communication system 1 are operating, that is, the current locations of the elements included in the communication system 1. It can be said therefrom that the above-mentioned inventory data indicates the current status of the geographical relationship between the elements (for example, geographical closeness between the elements).
In addition, the logical inventory data may include NSI data being data indicating attributes, such as an identifier of an instance of a network slice and the type of the network slice. The logical inventory data may also include NSSI data being data indicating attributes, such as an identifier of an instance of a network slice subnet and the type of the network slice subnet.
In addition, the logical inventory data may include NS data being data indicating attributes, such as an identifier of an instance of an NS and the type of the NS. The logical inventory data may also include NF data indicating attributes, such as an identifier of an instance of an NF and the type of the NF. The logical inventory data may also include CNFC data indicating attributes, such as an identifier of an instance of a CNFC and the type of the CNFC. The logical inventory data may also include pod data indicating attributes, such as an identifier of an instance of a pod included in the CNFC and the type of the pod. The logical inventory data may also include container data indicating attributes, such as a container ID of an instance of a container included in the pod and the type of the container.
Through the container ID of the container data included in the logical inventory data and the container ID included in the operating container ID list of the physical inventory data, an instance of a container is linked to the server on which that instance is operating.
Further, data indicating various attributes, such as the host name and the IP address, may be set in the above-mentioned data included in the logical inventory data. For example, the container data may include data indicating the IP address of a container corresponding to the container data. Further, for example, the CNFC data may include data indicating the IP address and the host name of a CNFC indicated by the CNFC data.
The logical inventory data may also include data indicating NSSAI including one or a plurality of pieces of S-NSSAI, which is set for each NF.
Further, the inventory database 70 can appropriately grasp the resource status in cooperation with the container management module 64. Then, the inventory database 70 appropriately updates the inventory data stored in the inventory database 70 based on the latest resource status.
Further, for example, the inventory database 70 updates the inventory data stored in the inventory database 70 in accordance with execution of an action such as construction of a new element included in the communication system 1, a change of a configuration of the elements included in the communication system 1, scaling of the elements included in the communication system 1, or replacement of the elements included in the communication system 1.
In this embodiment, for example, the service catalog storage 54 stores service catalog data.
The service catalog data may include, for example, service template data indicating the logic to be used by the life cycle management module 84 or the like. This service template data includes information required for constructing the network service. Specifically, for example, the service template data includes information defining the NS, the NF, and the CNFC and information indicating an NS-NF-CNFC correspondence relationship. Further, for example, the service template data contains a workflow script for constructing the network service.
An NS descriptor (NSD) is an example of the service template data. The NSD is associated with a network service, and indicates, for example, the types of a plurality of functional units (for example, a plurality of CNFs) included in the network service. The NSD may indicate the number of CNFs or other functional units included in the network service for each type thereof. The NSD may also indicate a file name of a CNFD described later, which relates to the CNF included in the network service.
Further, a CNF descriptor (CNFD) is an example of the above-mentioned service template data. The CNFD may indicate computer resources (CPU, memory, hard disk drive, and the like) required by the CNF. For example, the CNFD may also indicate, for each of a plurality of containers included in the CNF, computer resources (CPU, memory, hard disk drive, and the like) required by the container.
The service catalog data may also include information to be used by the policy manager module 80, for example, information relating to a threshold value (for example, a threshold value for abnormality detection) to be compared to a calculated performance index value.
The service catalog data may also include, for example, slice template data indicating the logic to be used by the slice manager module 82. The slice template data includes information required for executing instantiation of the network slice.
The slice template data includes information on a “generic network slice template” defined by the GSM Association (GSMA) (“GSM” is a trademark). Specifically, the slice template data includes network slice template data (NST), network slice subnet template data (NSST), and network service template data. The slice template data also includes information indicating the hierarchical structure of those elements which is illustrated in
In this embodiment, for example, the life cycle management module 84 constructs a new network service for which a purchase request has been made in response to the purchase request for the NS by the purchaser.
The life cycle management module 84 may execute, for example, the workflow script associated with the network service to be purchased in response to the purchase request. Then, the life cycle management module 84 may execute this workflow script, to thereby instruct the container management module 64 to deploy the container included in the new network service to be purchased. Then, the container management module 64 may acquire the container image of the container from the repository module 66 and deploy a container corresponding to the container image in the server.
In addition, in this embodiment, the life cycle management module 84 executes, for example, scaling or replacement of the element included in the communication system 1. In this case, the life cycle management module 84 may output a container deployment instruction or deletion instruction to the container management module 64. Then, the container management module 64 may execute, for example, a process for deploying a container or a process for deleting a container in accordance with the instruction. In this embodiment, the life cycle management module 84 can execute such scaling and replacement that cannot be handled by Kubernetes of the container management module 64.
The life cycle management module 84 may also output, to the configuration management module 62, a configuration management instruction for a newly constructed element group or an existing element into which a new setting is input. Then, the configuration management module 62 may execute configuration management such as settings in accordance with the configuration management instruction.
The life cycle management module 84 may also output, to the SDN controller 60, an instruction, linked to two IP addresses, to create a communication route between the two IP addresses.
In this embodiment, the slice manager module 82 executes, for example, instantiation of a network slice. In this embodiment, the slice manager module 82 executes, for example, instantiation of a network slice by executing the logic indicated by the slice template stored in the service catalog storage 54.
The slice manager module 82 includes, for example, a network slice management function (NSMF) and a network slice subnet management function (NSSMF) described in the 3GPP (trademark) specification "TS 28.533." The NSMF is a function for generating and managing network slices, and provides NSI management. The NSSMF is a function for generating and managing network slice subnets forming a part of a network slice, and provides NSSI management.
The slice manager module 82 may output, to the configuration management module 62, a configuration management instruction related to the instantiation of the network slice. Then, the configuration management module 62 may execute configuration management such as settings in accordance with the configuration management instruction.
The slice manager module 82 may also output, to the SDN controller 60, an instruction, linked to two IP addresses, to create a communication route between the two IP addresses.
In this embodiment, for example, the configuration management module 62 executes configuration management such as settings of the element group including the NFs in accordance with the configuration management instruction received from the life cycle management module 84 or the slice manager module 82.
In this embodiment, for example, the SDN controller 60 creates the communication route between the two IP addresses linked to the creation instruction in accordance with the instruction to create the communication route, which has been received from the life cycle management module 84 or the slice manager module 82.
In this case, for example, the SDN controller 60 may use segment routing technology (for example, segment routing IPv6 (SRv6)) to construct an NSI and NSSI for the server or an aggregation router present between communication routes. The SDN controller 60 may also generate an NSI and NSSI extending over a plurality of NFs to be set by issuing, to the plurality of NFs to be set, a command to set a common virtual local area network (VLAN) and a command to assign a bandwidth and a priority indicated by the setting information to the VLAN.
The SDN controller 60 may change the upper limit of the bandwidth that can be used for communication between two IP addresses without constructing a network slice.
In this embodiment, the monitoring function module 58 monitors, for example, the element group included in the communication system 1 based on a given management policy. In this case, for example, the monitoring function module 58 may monitor the element group based on a monitoring policy designated by the purchaser when the purchaser purchases the network service.
In this embodiment, the monitoring function module 58 executes monitoring at various levels, such as a slice level, an NS level, an NF level, a CNFC level, and a level of hardware such as the server.
For example, the monitoring function module 58 may set a module for outputting metric data in the hardware such as the server, or a software element included in the communication system 1 so that monitoring can be performed at the various levels described above. In this case, for example, the NF may output the metric data indicating a metric that can be measured (can be identified) by the NF to the monitoring function module 58. Further, the server may output the metric data indicating a metric relating to the hardware that can be measured (can be identified) by the server to the monitoring function module 58.
In addition, for example, the monitoring function module 58 may deploy, in the server, a sidecar container for aggregating the metric data indicating the metrics output from a plurality of containers in units of CNFCs (microservices). This sidecar container may include an agent called “exporter.” The monitoring function module 58 may repeatedly execute a process for acquiring the metric data aggregated in units of microservices from the sidecar container, at predetermined monitoring intervals through use of a mechanism of Prometheus.
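For illustration, metric data aggregated per CNFC (microservice) could be pulled from Prometheus roughly as sketched below; the endpoint URL, the metric name, and the label used to identify the CNFC are assumptions that depend on the actual exporter configuration.

```python
import requests  # the Prometheus HTTP API is queried over plain HTTP

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # hypothetical endpoint

def fetch_cpu_usage(cnfc_name: str) -> list:
    """Query a per-CNFC CPU usage metric from Prometheus at a monitoring interval."""
    # The metric name and the "cnfc" label are assumptions; they depend on the exporter.
    query = f'sum(rate(container_cpu_usage_seconds_total{{cnfc="{cnfc_name}"}}[5m]))'
    response = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query", params={"query": query}, timeout=10
    )
    response.raise_for_status()
    return response.json()["data"]["result"]
```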
The monitoring function module 58 may monitor performance index values regarding performance indices described in, for example, "TS 28.552, Management and orchestration; 5G performance measurements" or "TS 28.554, Management and orchestration; 5G end to end Key Performance Indicators (KPI)." Then, the monitoring function module 58 may acquire metric data indicating the performance index values to be monitored.
Then, for example, when the monitoring function module 58 acquires the above-mentioned metric data, the monitoring function module 58 outputs the metric data to the AI/big-data processing module 56.
Further, the elements included in the communication system 1, such as the network slice, the NS, the NF, and the CNFC, and the hardware such as the server notify the monitoring function module 58 of various alerts (for example, notify the monitoring function module 58 of an alert triggered by the occurrence of a failure).
Then, for example, when the monitoring function module 58 receives the above-mentioned notification of the alert, the monitoring function module 58 outputs the notification to the AI/big-data processing module 56.
In this embodiment, the AI/big-data processing module 56 accumulates, for example, pieces of metric data and notifications of the alerts that have been output from the monitoring function module 58. In addition, in this embodiment, for example, the AI/big-data processing module 56 stores in advance a trained machine learning model.
Then, in this embodiment, for example, the AI/big-data processing module 56 executes, based on the accumulated pieces of metric data and the above-mentioned machine learning model, an estimation process such as a future prediction process for a use status and quality of service of the communication system 1. The AI/big-data processing module 56 may generate estimation result data indicating results of the estimation process.
In this embodiment, for example, the performance management module 76 calculates, based on a plurality of pieces of metric data, a performance index value (for example, KPI) that is based on metrics indicated by those pieces of metric data. The performance management module 76 may calculate a performance index value (for example, performance index value relating to an end-to-end network slice) which is a comprehensive evaluation of a plurality of types of metrics and cannot be calculated from a single piece of metric data. The performance management module 76 may generate comprehensive performance index value data indicating a performance index value being a comprehensive evaluation.
The performance management module 76 may acquire the metric data from the monitoring function module 58 through intermediation of the AI/big-data processing module 56 as illustrated in
In this embodiment, the failure management module 74 detects the occurrence of a failure in the communication system 1 based on, for example, at least any one of the above-mentioned metric data, the above-mentioned notification of the alert, the above-mentioned estimation result data, or the above-mentioned comprehensive performance index value data. The failure management module 74 may detect, for example, the occurrence of a failure that cannot be detected from a single piece of metric data or a single notification of the alert, based on a predetermined logic. The failure management module 74 may also generate detection failure data indicating the detected failure.
The failure management module 74 may acquire the metric data and the notification of the alert directly from the monitoring function module 58 or through intermediation of the AI/big-data processing module 56 and the performance management module 76. The failure management module 74 may also acquire the estimation result data directly from the AI/big-data processing module 56 or through intermediation of the performance management module 76.
In this embodiment, the policy manager module 80 executes a predetermined determination process based on, for example, at least any one of the above-mentioned metric data, the above-mentioned notification of the alert, the above-mentioned estimation result data, the above-mentioned comprehensive performance index value data, or the above-mentioned detection failure data.
Then, the policy manager module 80 may execute an action corresponding to a result of the determination process. For example, the policy manager module 80 may output an instruction to construct a network slice to the slice manager module 82. The policy manager module 80 may also output an instruction for scaling or replacement of the elements to the life cycle management module 84 based on the result of the determination process.
In this embodiment, the ticket management module 72 generates, for example, a ticket indicating information to be notified to an administrator of the communication system 1. The ticket management module 72 may generate a ticket indicating details of the detection failure data. The ticket management module 72 may also generate a ticket indicating a value of the performance index value data or the metric data. The ticket management module 72 may also generate a ticket indicating a determination result obtained by the policy manager module 80.
Then, the ticket management module 72 notifies the administrator of the communication system 1 of the generated ticket. The ticket management module 72 may send, for example, an email to which the generated ticket is attached to an email address of the administrator of the communication system 1.
A process relating to addition of each application such as an NF to an execution platform (for example, Kubernetes cluster or server) is further described in the following.
In this embodiment, for example, the monitoring function module 58 monitors a resource usage status on each of a plurality of execution platforms (for example, Kubernetes clusters and servers) included in the communication system 1. Further, in this embodiment, for example, the monitoring function module 58 monitors a resource usage status in each of applications being executed on the execution platforms.
In this case, the resource usage status to be monitored may be at least one of a usage status of a CPU, a usage status of a memory, a usage status of a storage, a usage status of a network, or a usage status of electric power.
Examples of the usage status of the CPU include a CPU usage rate. Examples of the usage status of the memory include a memory usage amount and a memory usage rate. Examples of the usage status of the storage include a storage usage amount and a storage usage rate. Examples of the usage status of the network include a bandwidth usage amount and a bandwidth usage rate. Examples of the usage status of the electric power include a power consumption amount.
Then, the monitoring function module 58 outputs pieces of metric data indicating results of the monitoring to the AI/big-data processing module 56. In this manner, the pieces of metric data are accumulated in the AI/big-data processing module 56.
In this case, it is assumed that, in this embodiment, it has been determined to add an application to the communication system 1. For example, it is assumed that it has been determined to execute a process such as construction of a new NF triggered by the purchase of an NS or scale-out of an NF triggered by an increase in load.
An application determined to be added to the communication system 1, that is, an application to be added to the communication system 1, is hereinafter referred to as “to-be-added application.”
When it has been determined to add the to-be-added application to the communication system 1, the AI/big-data processing module 56 identifies an execution platform on which the to-be-added application is executable from among a plurality of execution platforms included in the communication system 1.
In this case, in this embodiment, for each type of application, a requirement relating to the execution platform on which the application of the type is executable may be determined in advance.
The requirement may be, for example, a requirement relating to hardware (hereinafter referred to as “hardware requirement”). Examples of the hardware requirement may include that single root I/O virtualization (SRIOV) has been implemented, that a graphics processing unit (GPU) has been installed, and that a field-programmable gate array (FPGA) has been installed. Examples of the hardware requirement may also include that the number of mounted GPUs is equal to or larger than a predetermined number, that the size of a mounted memory is equal to or larger than a predetermined size, and that the size of a mounted storage is equal to or larger than a predetermined size.
The requirement may also be, for example, a requirement relating to a location at which the execution platform is arranged.
The above-mentioned requirements may also be described in, for example, the CNFD stored in the service catalog storage 54. Then, the AI/big-data processing module 56 may identify the above-mentioned requirement by referring to the CNFD stored in the service catalog storage 54.
Then, the AI/big-data processing module 56 may identify the execution platform on which the to-be-added application is executable from among the plurality of execution platforms included in the communication system 1 based on the above-mentioned requirement associated with the type of the to-be-added application. The execution platform identified as the execution platform on which the to-be-added application is executable is hereinafter referred to as “candidate platform.”
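A simple sketch of this filtering step is given below; the requirement and platform attribute names are hypothetical.

```python
def is_executable(platform: dict, requirement: dict) -> bool:
    """Return True when the platform satisfies every stated hardware requirement."""
    if requirement.get("sriov") and not platform.get("sriov", False):
        return False
    if platform.get("gpus", 0) < requirement.get("min_gpus", 0):
        return False
    if platform.get("memory_gb", 0) < requirement.get("min_memory_gb", 0):
        return False
    if platform.get("storage_gb", 0) < requirement.get("min_storage_gb", 0):
        return False
    return True

def candidate_platforms(platforms: list, requirement: dict) -> list:
    """Identify the execution platforms on which the to-be-added application is executable."""
    return [platform for platform in platforms if is_executable(platform, requirement)]
```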
Then, the AI/big-data processing module 56 extracts, for example, pieces of metric data indicating the resource usage status in the last period for each of a plurality of candidate platforms. For example, the AI/big-data processing module 56 may extract pieces of metric data indicating the resource usage status collected during the last predetermined length of time (for example, one month).
Then, in this embodiment, for example, the AI/big-data processing module 56 identifies, for each of the plurality of candidate platforms, an actual result value of the resource usage status on the candidate platform. For example, in this embodiment, the AI/big-data processing module 56 identifies, for each of the plurality of candidate platforms, an actual result value of the resource usage status on the candidate platform based on the pieces of metric data extracted for the candidate platform. As described above, the usage status having the actual result value to be identified may be, for example, at least one of the usage status of the CPU, the usage status of the memory, the usage status of the storage, the usage status of the network, or the usage status of the electric power.
In this case, the AI/big-data processing module 56 may extract pieces of metric data indicating the resource usage status for each of the applications being executed on the candidate platform. Then, the AI/big-data processing module 56 may identify a total sum of the actual result values of the resource usage statuses in the respective applications executed on the candidate platform, which are indicated by the extracted pieces of metric data, as the actual result value of the resource usage status on the candidate platform.
The AI/big-data processing module 56 may also generate, based on the extracted pieces of metric data, usage status actual-result-value data indicating the actual result value of the resource usage status on the candidate platform, which is exemplified in
The usage status actual-result-value data in this embodiment may include a plurality of separate pieces of actual-result-value data indicating usage statuses of resources of mutually different types. In the example of
The separate pieces of actual-result-value data may also include a plurality of pieces of period actual-result-value data respectively associated with the period types. In this embodiment, for example, as illustrated in
For example, the piece of metric data indicating the usage status of the CPU regarding the time slot of 0:00 to 3:00 on a weekday may be identified from among the pieces of metric data indicating the resource usage status in the last one month. Then, the value of the piece of period actual-result-value data included in the CPU actual-result-value data in association with 0:00 to 3:00 on a weekday may be determined based on the identified piece of metric data. In this case, for example, a representative value a1, such as an average value or a maximum value of the CPU usage rate indicated by the identified piece of metric data, may be set as the value of the piece of period actual-result-value data included in the CPU actual-result-value data in association with 0:00 to 3:00 on a weekday.
In the same manner, the values of the other pieces of period actual-result-value data included in the CPU actual-result-value data are also determined, to thereby generate the CPU actual-result-value data illustrated in
Further, in the same manner, the memory actual-result-value data is generated based on the pieces of metric data indicating the usage status of the memory in the last one month. Further, the storage actual-result-value data is generated based on the pieces of metric data indicating the usage status of the storage in the last one month. Further, the network actual-result-value data is generated based on the pieces of metric data indicating the usage status of the network in the last one month. Further, the power consumption actual-result-value data is generated based on the pieces of metric data indicating the usage status of the electric power in the last one month.
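As an illustrative sketch of how one separate piece of actual-result-value data could be built from the extracted metric data, the following assumes that the 16 period types are the combinations of weekday/holiday and eight three-hour time slots, and that the representative value is the average; both are assumptions made only for this example.

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

def period_type(timestamp: datetime, is_holiday) -> tuple:
    """Map a timestamp to one of 16 period types: (weekday/holiday, three-hour time slot)."""
    day_kind = "holiday" if is_holiday(timestamp) else "weekday"
    slot_start = (timestamp.hour // 3) * 3
    return day_kind, f"{slot_start}:00-{slot_start + 3}:00"

def period_actual_result_values(samples, is_holiday) -> dict:
    """samples: iterable of (timestamp, usage value) collected over, e.g., the last one month.
    Returns a representative value (here, the average) per period type."""
    buckets = defaultdict(list)
    for timestamp, value in samples:
        buckets[period_type(timestamp, is_holiday)].append(value)
    return {period: mean(values) for period, values in buckets.items()}
```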
Then, in this embodiment, for example, the AI/big-data processing module 56 identifies, for each of the plurality of candidate platforms, a predicted value of the resource usage status on the candidate platform obtained in a case in which the to-be-added application has been constructed on the candidate platform, based on the above-mentioned actual result value relating to the candidate platform. The AI/big-data processing module 56 may identify the above-mentioned predicted value for each period type.
In this case, for example, the resource amount required by the to-be-added application may be defined in advance. For example, the resource amount required by the to-be-added application may be described in the above-mentioned CNFD. Further, the AI/big-data processing module 56 may refer to the CNFD stored in the service catalog storage 54 to identify the resource amount required by the to-be-added application.
In this case, for example, the required resource amount may be identified for each of the CPU, the memory, the storage, the network, and the power consumption. The required resource amount may also be identified for each of the above-mentioned plurality of period types.
In this case, the AI/big-data processing module 56 may generate AP resource data indicating a resource amount associated with the application, which is exemplified in
The AP resource data in this embodiment may include a plurality of separate pieces of AP resource data indicating resource amounts regarding resources of mutually different types. In the example of
The separate pieces of AP resource data may also include a plurality of pieces of period AP resource data respectively associated with the period types. In this embodiment, for example, the separate pieces of AP resource data include the pieces of period AP resource data regarding the same 16 period types as those of the pieces of period actual-result-value data included in the usage status actual-result-value data illustrated in
Then, the AI/big-data processing module 56 may identify, for each of the plurality of candidate platforms, the predicted value of the resource usage status on the candidate platform obtained in the case in which the to-be-added application has been constructed on the candidate platform, based on the above-mentioned usage status actual-result-value data associated with the candidate platform and the above-mentioned AP resource data. The AI/big-data processing module 56 may also generate usage status predicted-value data indicating the predicted value identified in this manner, which is exemplified in
The usage status predicted-value data in this embodiment may include a plurality of separate pieces of predicted-value data indicating usage statuses of resources of mutually different types. In the example of
In addition, the separate piece of predicted-value data may include a plurality of pieces of period predicted-value data respectively associated with the period types. In this embodiment, for example, as illustrated in
For example, a value c1 obtained by adding a value b1 of the CPU AP resource data regarding the time slot of 0:00 to 3:00 on a weekday, which is illustrated in
In the same manner, the values of the other pieces of period predicted-value data included in the CPU predicted-value data are also determined, to thereby generate the CPU predicted-value data, which is illustrated in
In the same manner as well, the memory predicted-value data is generated based on the memory AP resource data and the memory actual-result-value data. In addition, the storage predicted-value data is generated based on the storage AP resource data and the storage actual-result-value data. Further, the network predicted-value data is generated based on the network AP resource data and the network actual-result-value data. Still further, the power consumption predicted-value data is generated based on the power consumption AP resource data and the power consumption actual-result-value data.
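A minimal sketch of this addition is shown below: per resource type and per period type, the resource amount required by the to-be-added application is added to the actual result value on the candidate platform (for example, c1 = a1 + b1 for the CPU in the time slot of 0:00 to 3:00 on a weekday). The nested-dictionary data shape is an assumption.

```python
def predicted_values(actual: dict, ap_resource: dict) -> dict:
    """actual / ap_resource: {resource_type: {period_type: value}} (hypothetical shape).
    Returns the usage status predicted-value data for one candidate platform."""
    return {
        resource: {
            period: actual[resource][period] + ap_resource.get(resource, {}).get(period, 0.0)
            for period in actual[resource]
        }
        for resource in actual
    }
```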
The resource amount required by the to-be-added application may be determined based on the type of the execution platform (for example, scale or specifications of the execution platform). Then, the AP resource data may be generated for each type of the execution platform. Then, the usage status predicted-value data on the candidate platform may be generated based on the usage status actual-result-value data on the candidate platform and the AP resource data generated based on the type of the candidate platform.
A method of generating the usage status predicted-value data is not limited to the above-mentioned method.
For example, the AI/big-data processing module 56 may identify the predicted value of the resource usage status on the candidate platform obtained in the case in which the to-be-added application has been constructed on the candidate platform based on the actual result value of the resource usage status in a running application of the same type as that of the to-be-added application.
For example, the AI/big-data processing module 56 may store a trained machine learning model that has learned a correspondence between the actual result value of the resource usage status before addition of an application of the same type as that of the to-be-added application and the actual result value of the resource usage status after the addition of the application on an execution platform on which the application is operating. For example, this trained machine learning model may output the usage status predicted-value data in response to the input of the usage status actual-result-value data and data indicating the type of the to-be-added application.
Then, the AI/big-data processing module 56 may generate the usage status predicted-value data by inputting the usage status actual-result-value data and the data indicating the type of the to-be-added application to this trained machine learning model.
In this case, the trained machine learning model in this embodiment may be a conservative model that has learned training data regarding the execution platform exhibiting a conspicuous difference between the resource usage statuses before and after the addition of the application. For example, for each period type, the execution platform exhibiting the largest difference between the resource usage statuses in the period type between before and after the addition of the application may be identified. Then, the AI/big-data processing module 56 may cause the machine learning model to learn, for each period type, the correspondence between the actual result value of the resource usage status before the addition of the application and the actual result value of the resource usage status after the addition of the application regarding the execution platform identified in this manner. Then, the usage status predicted-value data may be generated through use of the trained machine learning model subjected to the learning for each period type in this manner.
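One possible, non-authoritative realization of such a trained model is sketched below with scikit-learn; the use of a linear regression model, and of scikit-learn itself, are assumptions rather than something prescribed by this embodiment.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def train_conservative_model(before_values, after_values):
    """Fit, for one period type, the correspondence between the actual result value of the
    resource usage status before and after the addition of an application of a given type,
    using pairs taken from the execution platform with the largest before/after difference."""
    X = np.asarray(before_values, dtype=float).reshape(-1, 1)
    y = np.asarray(after_values, dtype=float)
    return LinearRegression().fit(X, y)

def predict_after_addition(model, actual_result_value: float) -> float:
    """Predicted value of the resource usage status after the addition."""
    return float(model.predict(np.asarray([[actual_result_value]]))[0])
```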
In another case, the usage status predicted-value data may be generated based on a given calculation expression or correspondence rule indicating a relationship between the actual result value and predicted value of the usage status and the usage status actual-result-value data.
In the examples of
Further, in this embodiment, for example, for each of the plurality of candidate platforms, the AI/big-data processing module 56 identifies, based on the predicted values identified as described above, the leveling index value indicating, in a case in which the to-be-added application has been constructed on the candidate platform, at least one of a degree of leveling of the resource usage status on the candidate platform or a degree of leveling of the resource usage statuses among the plurality of candidate platforms.
In this case, the AI/big-data processing module 56 may identify the leveling index value based on the above-mentioned predicted value for each period type.
For example, the AI/big-data processing module 56 may identify a leveling index value indicating, in the case in which the to-be-added application has been constructed on the candidate platform, a variation in the predicted values for the respective period types on the candidate platform.
For example, for each of the plurality of candidate platforms, the AI/big-data processing module 56 may identify, for each of the five separate pieces of predicted-value data associated with the candidate platform, the variance of the values of the 16 pieces of period predicted-value data included in that separate piece of predicted-value data.
Then, the AI/big-data processing module 56 may identify, as the leveling index value associated with the candidate platform, a weighted linear sum, using given weights, of the variances identified for the five separate pieces of predicted-value data. The leveling index value may be identified based on a standard deviation instead of the variance.
The leveling index value identified in this manner corresponds to an example of the leveling index value indicating, in the case in which the to-be-added application has been constructed on the candidate platform, the degree of leveling of the resource usage status on the candidate platform.
The AI/big-data processing module 56 may also identify, for example, the leveling index value indicating, in the case in which the to-be-added application has been constructed on the candidate platform, a difference between a maximum value and a minimum value of the predicted values for the respective period types on the candidate platform.
For example, for each of the plurality of candidate platforms, the AI/big-data processing module 56 may identify, for each of the five separate pieces of predicted-value data associated with the candidate platform, the difference between the maximum value and the minimum value among the values of the 16 pieces of period predicted-value data included in that separate piece of predicted-value data.
Then, the AI/big-data processing module 56 may identify, as the leveling index value associated with the candidate platform, a weighted linear sum, using given weights, of the differences identified for the five separate pieces of predicted-value data.
The leveling index value identified in this manner corresponds to an example of the leveling index value indicating, in the case in which the to-be-added application has been constructed on the candidate platform, the degree of leveling of the resource usage status on the candidate platform.
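The two per-platform index computations above can be summarized in a short Python sketch. It assumes, as a simplification not stated verbatim above, that the five separate pieces of predicted-value data correspond to five monitored resources and that each holds the 16 period predicted values.

```python
from statistics import pvariance

def per_platform_leveling_index(predicted, weights):
    """predicted: {resource: [16 period predicted values]} for one candidate platform,
    e.g. resource in {"cpu", "memory", "storage", "bandwidth", "power"} (assumed).
    weights: {resource: given weight}.
    Returns the variance-based and the (max - min)-based index values."""
    variance_index = sum(weights[r] * pvariance(values) for r, values in predicted.items())
    range_index = sum(weights[r] * (max(values) - min(values)) for r, values in predicted.items())
    return variance_index, range_index
```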
Further, the AI/big-data processing module 56 may identify the leveling index value indicating, in the case in which the to-be-added application has been arranged on the candidate platform, the variation in the predicted values of the resource usage statuses among the plurality of candidate platforms.
In another case, for each of the plurality of candidate platforms, the AI/big-data processing module 56 may identify the leveling index value indicating, in the case in which the to-be-added application has been arranged on the candidate platform, a total sum of absolute values of differences between the predicted values of the resource usage statuses on the respective plurality of candidate platforms and a predetermined value.
In those cases, in identifying the leveling index value obtained in a case in which the to-be-added application has been arranged on a certain candidate platform, the value of the usage status predicted-value data corresponds to the predicted value of the resource usage status for the certain candidate platform, and for other candidate platforms, the values of the usage status actual-result-value data correspond to the predicted values of the resource usage statuses.
For example, it is assumed that the number of candidate platforms is “n”. In this case, for example, for each of the “n” candidate platforms, the AI/big-data processing module 56 may calculate, based on the usage status actual-result-value data associated with the candidate platform, an actual-result resource usage rate associated with the candidate platform. For example, a representative value of the actual result value of the CPU usage rate, a representative value of the actual result value of the memory usage rate, a representative value of the actual result value of the storage usage rate, and a representative value of the actual result value of the bandwidth usage rate may be calculated for each of a plurality of period types. Then, a representative value regarding those four representative values may be calculated as the actual-result resource usage rate associated with the candidate platform.
Further, for example, for each of the “n” candidate platforms, the AI/big-data processing module 56 may calculate, based on the usage status predicted-value data associated with the candidate platform, a predicted resource usage rate associated with the candidate platform. For example, a representative value of the predicted value of the CPU usage rate, a representative value of the predicted value of the memory usage rate, a representative value of the predicted value of the storage usage rate, and a representative value of the predicted value of the bandwidth usage rate may be calculated for each of a plurality of period types. Then, a representative value regarding those four representative values may be calculated as the predicted resource usage rate associated with the candidate platform.
Examples of the above-mentioned representative values include an average value and a maximum value. In addition, examples of the above-mentioned “representative value regarding four representative values” include an average value of the above-mentioned four average values, a maximum value of the above-mentioned four average values, an average value of the above-mentioned four maximum values, and a maximum value of the above-mentioned four maximum values.
Then, the AI/big-data processing module 56 may calculate, for each of the “n” candidate platforms, the leveling index value associated with the candidate platform based on the above-mentioned actual-result resource usage rate and the above-mentioned predicted resource usage rate.
For example, the value of the predicted resource usage rate of a focused platform being a certain candidate platform and the values of the actual-result resource usage rates of the other (n−1) candidate platforms may be identified. Then, the variance or standard deviation of the identified “n” values may be calculated as the leveling index value associated with the focused platform.
In another case, the difference between the value of the predicted resource usage rate regarding the focused platform and a predetermined value (for example, 70%) and the differences between the values of the actual-result resource usage rates regarding the other (n−1) candidate platforms and the predetermined value (for example, 70%) may be identified. Then, a total sum of absolute values of the identified “n” differences may be calculated as the leveling index value associated with the focused platform.
The leveling index value identified in this manner corresponds to an example of the leveling index value indicating the degree of leveling of the resource usage statuses among the plurality of candidate platforms.
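A minimal sketch of this among-platform index, assuming the focused platform's predicted resource usage rate and the other platforms' actual-result resource usage rates have already been reduced to representative values:

```python
from statistics import pvariance, pstdev

def among_platform_leveling_index(predicted_rate, actual_rates, target=0.70):
    """predicted_rate: predicted resource usage rate of the focused platform.
    actual_rates: actual-result resource usage rates of the other (n-1) candidate platforms.
    target: the predetermined value (70% in the example above)."""
    rates = [predicted_rate] + list(actual_rates)
    variance_index = pvariance(rates)                       # or pstdev(rates)
    deviation_index = sum(abs(r - target) for r in rates)   # total sum of absolute differences
    return variance_index, deviation_index
```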
Further, in this embodiment, the leveling index value associated with the candidate platform may be identified based on, in the case in which the to-be-added application has been constructed on the candidate platform, a first leveling index value indicating the degree of leveling of the resource usage status on the candidate platform and a second leveling index value indicating the degree of leveling of the resource usage statuses among the plurality of candidate platforms.
For example, an average value of the first leveling index value and the second leveling index value may be identified as the leveling index value associated with the candidate platform. In another case, for example, a weighted average value of the first leveling index value and the second leveling index value based on a given weight may be identified as the leveling index value associated with the candidate platform.
Then, the policy manager module 80 determines the execution platform on which the to-be-added application is to be constructed from among the plurality of candidate platforms based on the leveling index value identified as described above. For example, when the leveling index value indicating, in the case in which the to-be-added application has been constructed on the candidate platform, the variation in the predicted values for the respective period types on the candidate platform is identified, the execution platform on which the to-be-added application is to be constructed may be determined based on smallness of the variation indicated by the leveling index value.
Further, for example, when the leveling index value indicating, in the case in which the to-be-added application has been arranged on the candidate platform, the variation in the predicted values of the resource usage statuses among the plurality of candidate platforms is identified, the candidate platform on which the to-be-added application is to be constructed may be determined based on the smallness of the variation indicated by the leveling index value.
For example, the candidate platform associated with the smallest value among the leveling index values indicating the variations respectively associated with the plurality of candidate platforms may be determined as the execution platform on which the to-be-added application is to be constructed.
Further, for example, when the leveling index value indicating, in the case in which the to-be-added application has been constructed on the candidate platform, the difference between the maximum value and the minimum value of the predicted values for the respective period types on the candidate platform is identified, the execution platform on which the to-be-added application is to be constructed may be determined based on smallness of the difference indicated by the leveling index value. For example, the candidate platform associated with the smallest value among the leveling index values indicating the differences respectively associated with the plurality of candidate platforms may be determined as the execution platform on which the to-be-added application is to be constructed.
Further, for example, when the leveling index value indicating, in the case in which the to-be-added application has been arranged on the candidate platform, the total sum of the absolute values of the differences between the predicted values of the resource usage statuses on the respective plurality of candidate platforms and the predetermined value is identified, the execution platform on which the to-be-added application is to be constructed may be determined based on smallness of the total sum of the absolute values of the differences indicated by the leveling index values. For example, the candidate platform associated with the smallest value among the leveling index values indicating the total sum of the absolute values of the differences respectively associated with the plurality of candidate platforms may be determined as the execution platform on which the to-be-added application is to be constructed.
Then, the life cycle management module 84, the container management module 64, and the configuration management module 62 construct the to-be-added application on the execution platform determined as the execution platform on which the to-be-added application is to be constructed as described above.
The leveling index value described above becomes smaller as the degree of leveling becomes higher. Such a leveling index value as to become larger as the degree of leveling becomes higher may be used instead. In this case, the candidate platform associated with the largest value among the leveling index values respectively associated with the plurality of candidate platforms is determined as the execution platform on which the to-be-added application is to be constructed.
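The selection itself reduces to picking the candidate with the smallest leveling index value, or the largest when a larger-is-better index is used; a sketch with hypothetical platform IDs and index values:

```python
def choose_candidate_platform(index_by_platform, higher_is_better=False):
    """index_by_platform: {candidate platform ID: leveling index value}."""
    pick = max if higher_is_better else min
    return pick(index_by_platform, key=index_by_platform.get)

# Usage: the platform with the smallest index value is chosen (here "0002").
chosen = choose_candidate_platform({"0001": 0.012, "0002": 0.007, "0003": 0.021})
```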
For example, when each application is constructed on the execution platform in a round robin, there are some cases in which the resources of the execution platforms cannot be utilized effectively. When, for example, the population of an area covered by each application executed on an execution platform is diverse, the resource usage status may vary depending on the execution platform.
In this embodiment, the execution platform on which the to-be-added application is to be constructed is determined based on the above-mentioned leveling index value, thereby enabling the resources of the execution platforms to be utilized effectively.
The leveling index value in this embodiment may indicate a degree of improvement in the leveling of the resource usage statuses before and after the addition of the to-be-added application.
Further, the candidate platform associated with the predicted resource usage rate exceeding a predetermined threshold value may be excluded from the execution platforms on which the to-be-added application is to be constructed. That is, the execution platform on which the to-be-added application is to be constructed may be determined from among the candidate platforms associated with the predicted resource usage rate that does not exceed the predetermined threshold.
[Leveling of Resource Usage Statuses through Replacement]
The leveling of resource usage statuses on execution platforms (for example, Kubernetes clusters and servers) through replacement of an application being executed on the communication system 1 is further described in the following.
As described above, in this embodiment, for example, the life cycle management module 84, the container management module 64, and the configuration management module 62 construct each of the plurality of applications on any one of the plurality of execution platforms included in the communication system 1.
Then, as described above, the monitoring function module 58 monitors a resource usage status on each of a plurality of execution platforms (for example, Kubernetes clusters and servers) included in the communication system 1. Further, the monitoring function module 58 monitors a resource usage status in each of applications being executed on the execution platforms.
As described above, the resource usage status to be monitored may be at least one of a usage status of a CPU, a usage status of a memory, a usage status of a storage, a usage status of a network, or a usage status of electric power.
Then, as described above, the pieces of metric data indicating the monitoring results are accumulated in the AI/big-data processing module 56.
Then, in this embodiment, for example, when a leveling execution timing determined in advance has arrived, the AI/big-data processing module 56 extracts the pieces of metric data indicating the latest resource usage status for each of the plurality of execution platforms included in the communication system 1. For example, the AI/big-data processing module 56 extracts the pieces of metric data indicating the resource usage status accumulated from the timing at which the leveling was last executed until the present. The leveling execution timing arrives, for example, at a predetermined execution interval.
Then, for example, in this embodiment, the AI/big-data processing module 56 identifies, for each of the plurality of execution platforms, an actual result value of the resource usage status on the execution platform. As described above, the usage status having the actual result value to be identified may be, for example, at least one of the usage status of the CPU, the usage status of the memory, the usage status of the storage, the usage status of the network, or the usage status of the electric power.
In this case, the AI/big-data processing module 56 may identify a total sum of the actual result values of the resource usage statuses in the respective applications executed on the execution platform, which are indicated by the extracted pieces of metric data, as the actual result value of the resource usage status on the execution platform.
In this case, the AI/big-data processing module 56 may generate the usage status actual-result-value data for each execution platform as illustrated in
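For example, the per-platform actual result value may be obtained by summing, for each period type, the per-application actual result values indicated by the extracted metric data; a sketch under an assumed record layout:

```python
from collections import defaultdict

def platform_actual_values(app_metric_records):
    """app_metric_records: iterable of (platform_id, period_type, usage_value),
    one record per application and period type (assumed layout).
    Returns {platform_id: {period_type: summed usage across its applications}}."""
    totals = defaultdict(lambda: defaultdict(float))
    for platform_id, period_type, value in app_metric_records:
        totals[platform_id][period_type] += value
    return totals
```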
Then, the AI/big-data processing module 56 selects an application to be replaced from among, for example, the plurality of applications included in the communication system 1. In this case, for example, a movable application may be selected from the plurality of applications included in the communication system 1. In this embodiment, it is determined in advance whether or not the application is movable depending on the type of application. Accordingly, in this embodiment, for example, an application of a type determined in advance to be movable is identified.
The application selected in this manner is hereinafter referred to as “target application.” As the target application, one application may be selected or a plurality of applications may be selected.
In the following description, it is assumed that an application having an ID of “app1” has been selected as the target application. It is also assumed that the execution platform on which the selected target application is currently being executed has an ID of “0001”.
Then, the AI/big-data processing module 56 identifies an execution platform to be a replacement destination candidate for the target application. In this case, for example, an execution platform on which an application of the same type as that of the target application is executable may be identified as the execution platform to be the replacement destination candidate. Further, for example, an execution platform satisfying a given geographical requirement (for example, requirement that a distance between a replacement source and a replacement destination falls within a predetermined distance) may be identified as the execution platform to be the replacement destination candidate.
The execution platform on which the target application is currently being executed is hereinafter referred to as “replacement source platform,” and the execution platform identified as the replacement destination candidate for the target application is hereinafter referred to as “replacement destination candidate platform.”
In this case, for example, it is assumed that there are 99 replacement destination candidate platforms and IDs thereof are “0002” to “0100”.
Then, the AI/big-data processing module 56 generates replacement pattern data associated with the replacement pattern, which is exemplified in, for example,
For example, a piece of replacement pattern data in which the target application ID has a value of “app1”, the replacement source ID has a value of “0001”, and the replacement destination candidate ID has a value of “0002” is associated with a replacement pattern indicating that “an application having the ID of ‘app1’ is to be replaced from an execution platform having the ID of ‘0001’ onto an execution platform having the ID of ‘0002’.”
In the example of
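Under the IDs assumed above, generating the replacement pattern data amounts to pairing the target application and its replacement source with each replacement destination candidate; a sketch with a hypothetical record structure:

```python
def build_replacement_pattern_data(target_app_id, replacement_source_id, candidate_ids):
    """One piece of replacement pattern data per replacement destination candidate."""
    return [
        {
            "target_app_id": target_app_id,
            "replacement_source_id": replacement_source_id,
            "replacement_destination_candidate_id": candidate_id,
        }
        for candidate_id in candidate_ids
    ]

# Usage: 99 pieces of replacement pattern data for candidates "0002" to "0100".
patterns = build_replacement_pattern_data("app1", "0001", [f"{i:04d}" for i in range(2, 101)])
```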
Then, the AI/big-data processing module 56 identifies the actual result value of the latest resource usage status of the target application on the replacement source platform. In this case, the AI/big-data processing module 56 may identify the actual result values of the resource usage status of the target application accumulated from the timing at which the leveling was last executed until the present.
The actual result value of the usage status identified in this manner is hereinafter referred to as “target actual result value.” In this case, for example, the target actual result value may be identified for each of the CPU, the memory, the storage, the network, and the power consumption. Further, the target actual result value may be identified for each of the above-mentioned plurality of period types.
In this case, the AI/big-data processing module 56 may generate the AP resource data indicating the resource amount associated with the application as illustrated in
Then, the AI/big-data processing module 56 identifies the predicted value of the resource usage status on a replacement destination candidate platform in a case in which the target application has been replaced onto the replacement destination candidate platform.
In this case, for example, for each of the replacement patterns associated with the plurality of pieces of replacement pattern data shown in
For example, a value obtained by adding the value of the AP resource data indicating the above-mentioned target actual result value to the value of the usage status actual-result-value data on a replacement destination candidate platform may be identified as the predicted value of the resource usage status on the replacement destination candidate platform. In this case, the predicted value for each of the above-mentioned plurality of period types may also be identified.
Further, the predicted value of the resource usage status on the replacement destination candidate platform may instead be identified by multiplying the value of the AP resource data indicating the target actual result value by a coefficient corresponding to a difference in type between the replacement source platform and the replacement destination candidate platform (for example, a difference in the scale, specifications, or the like of the execution platforms), and adding the resultant value to the value of the usage status actual-result-value data on the replacement destination candidate platform.
Further, the AI/big-data processing module 56 may identify the predicted value of the resource usage status on the replacement source platform in a case in which the target application has been replaced onto the replacement destination candidate platform.
For example, the predicted value of the resource usage status on the replacement source platform in the case in which the target application has been replaced onto any one of the replacement destination candidate platforms may also be identified.
For example, a value obtained by subtracting the value of the AP resource data indicating the above-mentioned target actual result value from the value of the usage status actual-result-value data on a replacement source platform may be identified as the predicted value of the resource usage status on the replacement source platform. In this case, the predicted value for each of the above-mentioned plurality of period types may also be identified.
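A minimal sketch of the additive rule just described, assuming the actual-result values, the AP resource data, and the optional type coefficient are all given per period type (the names are hypothetical):

```python
def predict_after_replacement(dest_actual, src_actual, ap_resource, type_coefficient=1.0):
    """dest_actual / src_actual: {period_type: actual result value} for the replacement
    destination candidate platform and the replacement source platform.
    ap_resource: {period_type: target actual result value of the target application}.
    type_coefficient: optional coefficient for a difference in platform type (assumed 1.0)."""
    dest_predicted = {p: v + type_coefficient * ap_resource.get(p, 0.0)
                      for p, v in dest_actual.items()}
    src_predicted = {p: v - ap_resource.get(p, 0.0) for p, v in src_actual.items()}
    return dest_predicted, src_predicted
```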
In this case, the AI/big-data processing module 56 may generate such usage status predicted-value data indicating the identified predicted values as illustrated in
A method of generating the usage status predicted-value data is not limited to the above-mentioned method.
For example, the AI/big-data processing module 56 may store a trained machine learning model that has learned a correspondence between the actual result value of the resource usage status before addition of an application of the same type as that of the target application and the actual result value of the resource usage status after the addition of the application on an execution platform on which the application is operating. For example, this trained machine learning model may output the usage status predicted-value data in response to the input of the usage status actual-result-value data and data indicating the type of the to-be-added application.
In this case, in regard to the replacement destination candidate platform, the usage status actual-result-value data to be input corresponds to the actual result value of the resource usage status before the addition of the application, and the usage status predicted-value data to be output corresponds to the actual result value of the resource usage status after the addition of the application. Meanwhile, in regard to the replacement source platform, the usage status actual-result-value data to be input corresponds to the actual result value of the resource usage status after the addition of the application, and the usage status predicted-value data to be output corresponds to the actual result value of the resource usage status before the addition of the application.
Then, the AI/big-data processing module 56 may generate the usage status predicted-value data by inputting the usage status actual-result-value data and the data indicating the type of the target application to this trained machine learning model.
In another case, the usage status predicted-value data may be generated based on a given calculation expression or correspondence rule indicating a relationship between the actual result value and predicted value of the usage status and the usage status actual-result-value data.
Then, the AI/big-data processing module 56 identifies the leveling index value indicating the degree of leveling of the resource usage status in a case in which at least one application has been replaced onto another execution platform for each of a given plurality of replacement patterns based on the actual result value of the resource usage status identified as described above. In this case, the leveling index value may indicate, for example, at least one of a degree of leveling of the resource usage status on the execution platform of the replacement destination or a degree of leveling of the resource usage statuses among the plurality of execution platforms.
In this case, the AI/big-data processing module 56 may identify the leveling index value indicating at least one of the degree of leveling of the resource usage status on the execution platform of the replacement destination or the degree of leveling of the resource usage statuses among the plurality of execution platforms in a case in which a target application has been replaced onto each of execution platforms different from an execution platform on which the target application is being executed.
Further, the AI/big-data processing module 56 may identify, for the plurality of pieces of replacement pattern data shown in
In this case, the AI/big-data processing module 56 may identify the leveling index value based on predicted values of the resource usage statuses for respective period types.
Further, the AI/big-data processing module 56 may identify the leveling index value based on the predicted values of the resource usage statuses for the respective period types on the execution platform of the replacement destination in the case in which at least one application has been replaced onto another execution platform.
For example, the AI/big-data processing module 56 may identify, for each of the five separate pieces of predicted-value data associated with the replacement destination candidate platform identified by the replacement destination candidate ID included in the replacement pattern data, the variance of the values of the 16 pieces of period predicted-value data included in that separate piece of predicted-value data.
Then, the AI/big-data processing module 56 may identify, as the leveling index value associated with the replacement pattern indicated by the replacement pattern data, a weighted linear sum, using given weights, of the variances identified for the five separate pieces of predicted-value data. The leveling index value may be identified based on the standard deviation instead of the variance.
In this manner, the AI/big-data processing module 56 may identify the leveling index value indicating the variation in the predicted values for the respective period types. The leveling index value identified in this manner corresponds to an example of the leveling index value indicating the degree of leveling of the resource usage status on the execution platform of the replacement destination in the case in which at least one application has been replaced onto another execution platform for each of the plurality of replacement patterns.
The AI/big-data processing module 56 may also identify the leveling index value indicating a difference between a maximum value and a minimum value of the predicted values for the respective period types.
For example, the AI/big-data processing module 56 may identify, for each of the five separate pieces of predicted-value data associated with the replacement destination candidate platform identified by the replacement destination candidate ID included in the replacement pattern data, the difference between the maximum value and the minimum value among the values of the 16 pieces of period predicted-value data included in that separate piece of predicted-value data.
Then, the AI/big-data processing module 56 may identify, as the leveling index value associated with the replacement pattern indicated by the replacement pattern data, a weighted linear sum, using given weights, of the differences identified for the five separate pieces of predicted-value data.
The leveling index value identified in this manner corresponds to an example of the leveling index value indicating the degree of leveling of the resource usage status on the execution platform of the replacement destination in the case in which at least one application has been replaced onto another execution platform for each of the plurality of replacement patterns.
Further, the AI/big-data processing module 56 may identify the leveling index value indicating the variation in the predicted values of the resource usage statuses among the plurality of execution platforms included in the communication system 1 in the case in which at least one application has been replaced onto another execution platform.
In another case, the AI/big-data processing module 56 may identify the leveling index value indicating a total sum of absolute values of differences between the predicted values of the resource usage statuses on the plurality of execution platforms included in the communication system 1 and a predetermined value in the case in which at least one application has been replaced onto another execution platform.
In those cases, in regard to the replacement destination candidate execution platforms and the replacement source platform that are associated with the replacement pattern, the value of the usage status predicted-value data corresponds to the predicted value of the resource usage status, and in regard to the other execution platforms, the value of the usage status actual-result-value data corresponds to the predicted value of the resource usage status.
Now, an exemplary case of identifying the leveling index value associated with the piece of replacement pattern data in which the replacement destination candidate ID has the value of “0002” is described.
In this case, for example, the AI/big-data processing module 56 may calculate, based on the usage status predicted-value data associated with the replacement destination candidate platform (execution platform having the ID of “0002”), the predicted resource usage rate associated with the replacement destination candidate platform.
Further, for example, the AI/big-data processing module 56 may calculate, based on the usage status predicted-value data associated with the replacement source platform (execution platform having the ID of “0001”), the predicted resource usage rate associated with the execution platform.
For example, the representative value of the predicted value of the CPU usage rate, the representative value of the predicted value of the memory usage rate, the representative value of the predicted value of the storage usage rate, and the representative value of the predicted value of the bandwidth usage rate may be calculated for each of the plurality of period types. Then, the representative value regarding those four representative values may be calculated as the predicted resource usage rate associated with the execution platform.
Further, for example, for each of the remaining execution platforms, the AI/big-data processing module 56 may calculate, based on the usage status actual-result-value data associated with the execution platform, an actual-result resource usage rate associated with the execution platform. In this case, for example, the AI/big-data processing module 56 may calculate, for each of the 98 execution platforms being the remaining replacement destination candidate platforms, the actual-result resource usage rate associated with the execution platform.
For example, the representative value of the actual result value of the CPU usage rate, the representative value of the actual result value of the memory usage rate, the representative value of the actual result value of the storage usage rate, and the representative value of the actual result value of the bandwidth usage rate may be calculated for each of the plurality of period types. Then, the representative value regarding those four representative values may be calculated as the actual-result resource usage rate associated with the execution platform.
Examples of the above-mentioned representative values include an average value and a maximum value. In addition, examples of the above-mentioned “representative value regarding four representative values” include an average value of the above-mentioned four average values, a maximum value of the above-mentioned four average values, an average value of the above-mentioned four maximum values, and a maximum value of the above-mentioned four maximum values.
Then, the AI/big-data processing module 56 may calculate the leveling index value associated with the piece of replacement pattern data in which the replacement destination candidate ID has the value of “0002” based on the above-mentioned actual-result resource usage rate and the above-mentioned predicted resource usage rate.
For example, the values of the predicted resource usage rates of the execution platform having the ID of “0001” and the execution platform having the ID of “0002” and the values of the actual-result resource usage rates of the remaining execution platforms may be identified. Then, the variance or standard deviation of the identified values may be calculated as the leveling index value associated with the piece of replacement pattern data in which the replacement destination candidate ID has the value of “0002”.
In another case, a difference between each of the values of the predicted resource usage rates regarding the execution platform having the ID of “0001” and the execution platform having the ID of “0002” and a predetermined value (for example, 70%) and a difference between each of the values of the actual-result resource usage rates regarding the remaining execution platforms and the predetermined value (for example, 70%) may be identified. Then, a total sum of absolute values of the identified differences may be calculated as the leveling index value associated with the piece of replacement pattern data in which the replacement destination candidate ID has the value of “0002”.
The leveling index value identified in this manner corresponds to an example of the leveling index value indicating the degree of leveling of the resource usage statuses among the plurality of execution platforms.
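For the replacement pattern having the replacement destination candidate ID of “0002”, the index may therefore be computed from two predicted rates (for “0001” and “0002”) and 98 actual-result rates; a sketch under that assumption:

```python
from statistics import pstdev

def leveling_index_for_pattern(predicted_rates, actual_rates, target=0.70):
    """predicted_rates: {platform_id: predicted resource usage rate} for the replacement
    source ("0001") and the replacement destination candidate ("0002").
    actual_rates: {platform_id: actual-result resource usage rate} for the remaining platforms.
    target: the predetermined value (70% in the example above)."""
    rates = list(predicted_rates.values()) + list(actual_rates.values())
    dispersion_index = pstdev(rates)                       # or the variance
    deviation_index = sum(abs(r - target) for r in rates)  # total sum of absolute differences
    return dispersion_index, deviation_index
```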
Further, in this embodiment, the leveling index value associated with the replacement pattern may be identified based on a first leveling index value indicating the degree of leveling of the resource usage status on the execution platform of the replacement destination and a second leveling index value indicating the degree of leveling of the resource usage statuses among the plurality of execution platforms.
For example, an average value of the first leveling index value and the second leveling index value may be identified as the leveling index value associated with the replacement pattern. In another case, for example, a weighted average value of the first leveling index value and the second leveling index value based on a given weight may be identified as the leveling index value associated with the replacement pattern.
Then, in this embodiment, for example, the policy manager module 80 determines, based on the leveling index value identified for each of the plurality of replacement patterns, a replacement pattern relating to replacement to be executed.
In this case, the policy manager module 80 may determine, based on the leveling index value in the case in which a target application has been replaced onto each of execution platforms different from an execution platform on which the target application is being executed, the execution platform of the replacement destination for the target application.
For example, when the leveling index value indicating the variation in the predicted values for the respective period types is identified, the replacement pattern relating to the replacement to be executed may be determined based on the smallness of the variation indicated by the leveling index value.
Further, for example, when the leveling index value indicating the variation in the predicted values of the resource usage statuses among the plurality of execution platforms is identified, the replacement pattern relating to the replacement to be executed may be determined based on the smallness of the variation indicated by the leveling index value.
For example, the replacement pattern associated with the smallest value among the leveling index values indicating the variations respectively associated with the plurality of replacement patterns may be determined as the replacement pattern relating to the replacement to be executed.
Further, for example, when the leveling index value indicating the difference between the maximum value and the minimum value among the predicted values for the respective period types is identified, the replacement pattern relating to the replacement to be executed may be determined based on smallness of the difference indicated by the leveling index value. For example, the replacement pattern associated with the smallest value among the leveling index values indicating the differences respectively associated with the plurality of replacement patterns may be determined as the replacement pattern relating to the replacement to be executed.
Further, for example, when the leveling index value indicating the total sum of the absolute values of the differences between the predicted values of the resource usage statuses on the respective execution platforms and the predetermined value is identified, the replacement pattern relating to the replacement to be executed may be determined based on smallness of the total sum of the absolute values of the differences indicated by the leveling index value. For example, the replacement pattern associated with the smallest value among the leveling index values indicating the total sums of the absolute values of the differences respectively associated with the plurality of replacement patterns may be determined as the replacement pattern relating to the replacement to be executed.
Then, in this embodiment, for example, the life cycle management module 84, the container management module 64, and the configuration management module 62 replace at least one application based on a replacement pattern determined as the replacement pattern relating to the replacement to be executed as described above.
For example, in the example of
In this manner, the life cycle management module 84, the container management module 64, and the configuration management module 62 may replace the target application onto the execution platform determined as the execution platform of the replacement destination.
In this embodiment, the replacement of a plurality of applications may be executed by executing the above-mentioned process sequentially on the plurality of applications included in the communication system 1, with each of those applications set as the target application.
Further, as described above, one replacement pattern may be associated with the replacement of a plurality of applications. In this case, a plurality of applications may be replaced based on a replacement pattern determined as the replacement pattern relating to the replacement to be executed.
The leveling index value described above becomes smaller as the degree of leveling becomes higher. Such a leveling index value as to become larger as the degree of leveling becomes higher may be used instead. In this case, the replacement pattern associated with the largest value among the leveling index values respectively associated with the plurality of replacement patterns is determined as the replacement pattern relating to the replacement to be executed.
Further in this embodiment, the AI/big-data processing module 56 may identify the leveling index value based on a degree of improvement in the leveling of the resource usage status on the execution platform of the replacement destination and a degree of improvement in the leveling of the resource usage status on the execution platform of a replacement source.
For example, on the replacement source platform, a value obtained by subtracting the variance in the values of the period predicted-value data included in the usage status predicted-value data from the variance in the values of the period actual-result-value data included in the usage status actual-result-value data may be identified as a value of a first degree of improvement. Then, on the replacement destination candidate platform, a value obtained by subtracting the variance in the values of the period predicted-value data included in the usage status predicted-value data from the variance in the values of the period actual-result-value data included in the usage status actual-result-value data may be identified as a value of a second degree of improvement. Then, a total sum of the value of the first degree of improvement and the value of the second degree of improvement may be identified as the leveling index value. In this case, the replacement pattern associated with the largest value among the leveling index values respectively associated with the plurality of replacement patterns is determined as the replacement pattern relating to the replacement to be executed. In this case, the leveling index value may be identified based on the standard deviation instead of the variance.
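A sketch of this improvement-based index, assuming the 16 period values per platform are available both as actual result values and as predicted values:

```python
from statistics import pvariance

def improvement_leveling_index(src_actual, src_predicted, dest_actual, dest_predicted):
    """Each argument: the 16 period values for one platform (actual-result or predicted).
    A positive value means the period-to-period variation is predicted to shrink."""
    first_improvement = pvariance(src_actual) - pvariance(src_predicted)     # replacement source
    second_improvement = pvariance(dest_actual) - pvariance(dest_predicted)  # replacement destination
    # The replacement pattern with the largest total sum is selected.
    return first_improvement + second_improvement
```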
As described above, the leveling of the resource usage statuses through the replacement is executed based on the actual result value of the resource usage status. Thus, according to this embodiment, it is possible to effectively utilize resources of an execution platform on which each application is constructed.
In this embodiment, when the leveling index values identified for all replacement patterns for a target application, for example, do not satisfy a predetermined standard, the replacement of the target application may be inhibited from being executed.
For example, in a case in which the leveling index value becomes smaller as the degree of leveling becomes higher, when the leveling index values identified for all the replacement patterns exceed a predetermined threshold value, the replacement may be inhibited from being executed.
Further, when the above-mentioned total sum of the value of the first degree of improvement and the value of the second degree of improvement is negative for all the replacement patterns, the replacement may be inhibited from being executed. In another case, in this embodiment, it may be determined to add an application to the communication system 1, and the addition of the application may be scheduled.
In such a case, for example, the construction of the to-be-added application may be set to be pending even when the execution platform on which the to-be-added application is to be constructed is determined as described above in the section of “Addition of Application to Execution Platform.”
The execution platform determined as the execution platform on which the to-be-added application is to be constructed, that is, an execution platform on which an application is scheduled to be added is hereinafter referred to as “addition-scheduled execution platform.”
As described above, when the addition-scheduled execution platform is determined, the AI/big-data processing module 56 has already identified the predicted value of the resource usage status on the addition-scheduled execution platform in the case in which the to-be-added application has been constructed on the addition-scheduled execution platform.
As described above, the AI/big-data processing module 56 may identify the predicted value of the resource usage status on the addition-scheduled execution platform based on the actual result value of the resource usage status in a running application of the same type as that of the to-be-added application.
Then, in this case, in the process described above in the section of “Leveling of Resource Usage Statuses through Replacement,” the AI/big-data processing module 56 may identify the leveling index value regarding the addition-scheduled execution platform based on the predicted value in the case in which the to-be-added application has been constructed.
For example, in the identification of the leveling index value performed in the process of “Leveling of Resource Usage Statuses through Replacement,” the usage status predicted-value data indicating the predicted value in the case in which the to-be-added application has been constructed, which has been identified in the determination of the addition-scheduled execution platform, may be handled as the usage status actual-result-value data on the addition-scheduled execution platform.
Then, the to-be-added application may be constructed at a timing at which the to-be-added application is scheduled to be added.
As described above, the leveling of the resource usage statuses through the replacement may be performed on the premise that the to-be-added application is to be added. This configuration ensures that, when the to-be-added application is added, the resource amount required by the to-be-added application can be reliably secured on the addition-scheduled execution platform. Accordingly, it is possible to suppress the influence exerted by the construction of the to-be-added application on the network service being provided, and as a result, the to-be-added application can be smoothly constructed.
Further, in this embodiment, in a case in which an application in which a required resource amount is identified based on a conservative model has been constructed, the application sometimes does not use the resource amount identified based on the conservative model in actuality. As a result, there may occur a situation in which the resources of the execution platforms are not being utilized effectively. Even in such a case, in this embodiment, the leveling of the resource usage statuses through the replacement is executed, to thereby be able to solve the situation in which the resources of the execution platforms are not being utilized effectively.
Now, an example of a flow of a process performed by the platform system 30 in this embodiment with the determination to add the application to the communication system 1 as a trigger is described with reference to a flow chart exemplified in
First, the AI/big-data processing module 56 identifies a plurality of candidate platforms from among a plurality of execution platforms (for example, Kubernetes clusters) included in the communication system 1 (Step S101).
Then, the AI/big-data processing module 56 generates, for each of the plurality of candidate platforms, the usage status actual-result-value data associated with the candidate platform (Step S102).
Then, the AI/big-data processing module 56 generates, for each of the plurality of candidate platforms, the usage status predicted-value data associated with the candidate platform (Step S103).
Then, the AI/big-data processing module 56 identifies, for each of the plurality of candidate platforms, the leveling index value associated with the candidate platform (Step S104). As described above, in the process step of Step S104, for each of the plurality of candidate platforms, the leveling index value associated with the candidate platform may be identified based on the usage status predicted-value data associated with the candidate platform. In another case, for each of the plurality of candidate platforms, the leveling index value associated with the candidate platform may be identified based on the usage status predicted-value data associated with the candidate platform and the usage status actual-result-value data associated with the other candidate platforms.
Then, the policy manager module 80 determines the execution platform on which the to-be-added application is to be constructed from among the plurality of candidate platforms identified in the process step of Step S101 based on the leveling index value identified for each of the plurality of candidate platforms in the process step of Step S104 (Step S105).
Then, the life cycle management module 84, the container management module 64, and the configuration management module 62 construct the to-be-added application on the execution platform determined in the process step of Step S105 (Step S106), and the process illustrated in this process example is ended.
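A compact sketch of Steps S101 to S106 follows. The helper callables stand in for the module behavior described above and are hypothetical, not actual APIs; a smaller leveling index value is assumed to mean a higher degree of leveling.

```python
def add_application_flow(app_type, identify_candidates, build_actual, build_predicted,
                         leveling_index, construct):
    """Steps S101-S106 with the module behavior passed in as placeholder callables."""
    candidates = identify_candidates(app_type)                                # Step S101
    actual = {c: build_actual(c) for c in candidates}                         # Step S102
    predicted = {c: build_predicted(c, app_type) for c in candidates}         # Step S103
    index = {c: leveling_index(predicted[c], actual) for c in candidates}     # Step S104
    chosen_platform = min(index, key=index.get)                               # Step S105
    construct(app_type, chosen_platform)                                      # Step S106
    return chosen_platform
```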
Next, an example of a flow of a process relating to the leveling of the resource usage statuses through the replacement, which is performed by the platform system 30 in this embodiment, is described with reference to flow charts exemplified in
In this process example, the AI/big-data processing module 56 stands by for arrival of the leveling execution timing (Step S201).
When the leveling execution timing has arrived, the AI/big-data processing module 56 selects, as the target application, an application for which the process steps of from Step S203 to Step S212 have not been executed from among the applications that can be the target applications (Step S202).
Then, the AI/big-data processing module 56 identifies at least one replacement destination candidate platform regarding the target application selected in the process step of Step S202 (Step S203).
Then, the AI/big-data processing module 56 generates the usage status actual-result-value data associated with the replacement source platform, which is the execution platform on which the target application selected in the process step of Step S202 is being executed, and the usage status actual-result-value data associated with each of at least one replacement destination candidate platform identified in the process step of Step S203 (Step S204).
Then, the AI/big-data processing module 56 generates the AP resource data associated with the target application selected in the process step of Step S202 (Step S205).
Then, the AI/big-data processing module 56 generates the usage status predicted-value data associated with the execution platform (replacement source platform) on which the target application selected in the process step of Step S202 is being executed (Step S206). In the process step of Step S206, for example, the usage status predicted-value data associated with the replacement source platform is generated based on the usage status actual-result-value data associated with the replacement source platform, which is generated in the process step of Step S204, and the AP resource data generated in the process step of Step S205.
Then, the AI/big-data processing module 56 generates the replacement pattern data associated with the target application selected in the process step of Step S202 based on the ID of the target application, the ID of the execution platform on which the target application is being executed, and the ID of the replacement destination candidate platform identified in the process step of Step S203 (Step S207). In this case, for example, as many pieces of replacement pattern data as the number of replacement destination candidate platforms identified in the process step of Step S203 are generated.
Then, the AI/big-data processing module 56 selects a piece of replacement pattern data for which the process steps of Step S209 and Step S210 have not been executed from among the pieces of replacement pattern data generated in the process step of Step S207 (Step S208).
Then, the AI/big-data processing module 56 generates the usage status predicted-value data associated with the replacement destination candidate platform identified by the replacement destination candidate ID included in the piece of replacement pattern data selected in the process step of Step S208 (Step S209). In the process step of Step S209, for example, the usage status predicted-value data associated with the replacement destination candidate platform is generated based on the usage status actual-result-value data associated with the replacement destination candidate platform, which is generated in the process step of Step S204, and the AP resource data generated in the process step of Step S205.
Then, the AI/big-data processing module 56 identifies the leveling index value associated with the replacement pattern indicated by the piece of replacement pattern data selected in the process step of Step S208 (Step S210).
In Step S210, for example, the leveling index value may be identified based on the usage status predicted-value data associated with the replacement destination candidate platform, which is generated in the process step of Step S209.
Further, for example, the leveling index value may be identified based on the usage status predicted-value data associated with the replacement source platform, which is generated in the process step of Step S206, the usage status predicted-value data associated with the replacement destination candidate platform, which is generated in the process step of Step S209, and the usage status actual-result-value data associated with each of the remaining replacement destination candidate platforms, which is generated in the process step of Step S204.
Then, the AI/big-data processing module 56 examines whether or not the process steps of Step S209 and Step S210 have been executed for all the replacement patterns (Step S211).
When it is confirmed that the process steps of Step S209 and Step S210 have not been executed for all the replacement patterns (N in Step S211), the process returns to the process step of Step S208.
When it is confirmed that the process steps of Step S209 and Step S210 have been executed for all the replacement patterns (Y in Step S211), the policy manager module 80 determines the execution platform being the replacement destination for the target application selected in the process step of Step S202 based on the leveling index value associated with at least one replacement pattern, which is identified in the process step of Step S210 (Step S212).
Then, the policy manager module 80 examines whether or not the process steps of from Step S203 to Step S212 have been executed for all the applications that can be the target applications (Step S213).
When it is confirmed that the process steps of from Step S203 to Step S212 have not been executed for all the applications (N in Step S213), the process returns to the process step of Step S202.
When it is confirmed that the process steps of from Step S203 to Step S212 have been executed for all the applications (Y in Step S213), the life cycle management module 84, the container management module 64, and the configuration management module 62 replace each of all the applications that can be the target applications onto the execution platform determined as the replacement destination for the application in the process step of Step S212 (Step S214), and the process returns to the process step of Step S201.
In the process step of Step S211, it may be determined not to execute the replacement of the target application selected in the process step of Step S202. Then, in regard to the application for which the replacement has been determined not to be executed, the replacement may not be executed in the process step of Step S214.
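A sketch of Steps S202 to S214 for one leveling execution timing is shown below; the tuple layout and all helper callables are hypothetical placeholders for the module behavior described above, and a smaller leveling index value is assumed to mean a higher degree of leveling.

```python
def leveling_by_replacement_flow(targets, identify_candidates, build_actual,
                                 build_ap_resource, predict_source, predict_destination,
                                 leveling_index, replace):
    """targets: [(target_app_id, replacement_source_id), ...]."""
    decided = {}
    for app_id, source_id in targets:                                        # Step S202
        candidate_ids = identify_candidates(app_id)                          # Step S203
        actual = {p: build_actual(p) for p in [source_id] + candidate_ids}   # Step S204
        ap_resource = build_ap_resource(app_id, source_id)                   # Step S205
        source_predicted = predict_source(actual[source_id], ap_resource)    # Step S206
        index = {}                                                           # Step S207: one pattern per candidate
        for candidate_id in candidate_ids:                                   # Step S208
            dest_predicted = predict_destination(actual[candidate_id], ap_resource)         # Step S209
            index[candidate_id] = leveling_index(source_predicted, dest_predicted, actual)  # Step S210
        decided[app_id] = min(index, key=index.get)                          # Step S212 (after the S211 check)
    for app_id, destination_id in decided.items():
        replace(app_id, destination_id)                                      # Step S214
    return decided
```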
It should be noted that the present invention is not limited to the above-mentioned embodiment.
For example, the execution platform in this embodiment may be a Kubernetes cluster. The execution platform in this embodiment may also be a server.
Further, the to-be-added application or the target application in this embodiment may be a network function included in the communication system 1 or an application other than the network function, such as an application for big data analysis or an AI.
Further, the leveling index value is described above as being calculated based on all of the usage status of the CPU, the usage status of the memory, the usage status of the storage, the usage status of the network, and the usage status of the electric power, but the leveling index value may be calculated based on some of those statuses. For example, the leveling index value may be calculated based on any one of the usage status of the CPU, the usage status of the memory, the usage status of the storage, the usage status of the network, or the usage status of the electric power.
Further, the functional unit in this embodiment is not limited to those illustrated in
Further, the functional unit in this embodiment is not required to be an NF in 5G. For example, the functional unit in this embodiment may be an eNodeB, a vDU, a vCU, a packet data network gateway (P-GW), a serving gateway (S-GW), a mobility management entity (MME), a home subscriber server (HSS), or another network node in 4G.
Further, the functional unit in this embodiment is not required to be a CNF, and may be a virtualized network function (VNF), which is a virtual-machine-based (VM-based) functional unit using a hypervisor-type or host-type virtualization technology. Further, the functional unit in this embodiment is not required to be implemented by software, and may be implemented by hardware, for example, by an electronic circuit. Further, the functional unit in this embodiment may be implemented by a combination of an electronic circuit and software.
Filing Document: PCT/JP2022/020273; Filing Date: 5/13/2022; Country: WO