The disclosure relates to machine learning (ML) services and, for example, to a method and an electronic device for managing ML services in a wireless communication network.
5G is a service-based architecture in which hundreds to thousands of services are deployed under the same umbrella. As a result, managing the network and understanding traffic patterns manually is a cumbersome task. Operators therefore require artificial intelligence (AI) and/or machine learning (ML) based solutions that can understand and predict a problem in advance, so that the operator can take decisions to mitigate the problem before it occurs. However, in large countries there are millions of base stations and billions of devices in the network, while ML resources are limited (owing to capital expenditure (CAPEX) and operating expenditure (OPEX)). The operator needs to balance these resources judiciously in order to mitigate problems in various cities in different parts of the country. Manual solutions that still depend on human intervention for choosing ML resources and models may therefore not prove beneficial for the operator. For millions of base stations and billions of devices, the operator might be able to provision ML resources for only some thousands of base stations, and needs to keep rotating those resources according to site demands in the network.
Owing to the many heterogeneous services and devices generating diverse traffic patterns, it is necessary to choose appropriate ML models and an optimal amount of resources in view of the current ML resource usage in the network. Keeping in mind the availability of ML resources in the network, the service being served, and the type of problem to be addressed using ML/AI, it is also necessary to declare the ML periodicity (e.g., the interval at which data is collected and the frequency of training and prediction), the tolerable error limit, and the required model accuracy. Provisioning these items manually is very difficult and can lead to inappropriate, non-optimal model selection and ML resource allocation, which can in turn lead to a non-optimal mitigation solution, potentially causing subscriber loss, degradation of QoS/QoE in the network, or an increase in the network operator's OPEX. Therefore, there is a need to automate the provisioning of the ML model, its related parameters (such as periodicity, errors and accuracies), and ML resources.
Embodiments of the disclosure provide a method and an electronic device for automatically managing machine learning (ML) services in a wireless communication network. The automation of ML package selection from an ML repository based on various parameters enables selection of an optimized ML package based on the requirements of a service request. As a result, the ML resources available on the operator's side are utilized judiciously.
Accordingly, an embodiment herein discloses a method for managing machine learning (ML) services by an electronic device in a wireless communication network. The method may include storing a plurality of ML packages. Each of the plurality of ML packages executes at least one network service request. The method may include receiving a trigger based on the at least one network service request from a server. The method may include determining a plurality of parameters corresponding to the at least one network service request, in response to receiving the trigger from the server. The method may include determining at least one ML package from the plurality of ML packages based on the trigger and the plurality of parameters corresponding to the at least one network service request; and deploying the determined at least one ML package for executing the at least one network service request.
In an embodiment, the trigger based on the at least one network service request may indicate at least one of: formation of a new network slice and an anomaly corresponding to the new network slice; and a scenario of a service level assurance (SLA) provided by a network operator not being met.
The plurality of parameters corresponding to the at least one network service request may comprise information of a service profile of a network, ML requirements of the at least one network operator, a network traffic pattern for a specific service, and unfilled ML templates associated with the specific service.
In an embodiment, the network traffic pattern for a specific service is determined by receiving the information of the service profile of the network and the ML requirements of the at least one network operator as inputs, and determining a plurality of network elements exhibiting the same network traffic pattern over a period of time. The method may include grouping the plurality of network elements exhibiting the same network traffic pattern over the period of time, training one of the grouped network elements using a specific training model, and instructing an ML orchestrator to train the rest of the grouped network elements using the specific training model used by the ML services management controller for training the one network element. Using that specific training model to train the rest of the plurality of network elements saves the ML resources used for training.
In an embodiment, each of the plurality of ML packages comprises at least one of: a predicted requirement of network resources for implementing an ML technique, a predicted optimal ML model and related libraries, an error prediction window, a periodicity of predicting the error, and at least one of a training accuracy and a prediction accuracy.
In an embodiment, determining the at least one ML package from the plurality of ML packages based on the trigger and the plurality of parameters corresponding to the at least one network service request may include inputting the trigger received from the server and the plurality of parameters corresponding to the at least one network service request to one of a deep reinforcement learning engine and a deep dynamic learning engine; and determining the at least one ML package of the plurality of ML packages, based on the trigger and the plurality of parameters corresponding to the at least one network service request, by the one of the deep reinforcement learning engine and the deep dynamic learning engine. The method may include filling values corresponding to the determined at least one ML package in at least one unfilled ML template associated with the specific service.
In an embodiment, the method may include monitoring a plurality of network service requests from the server and identifying one or more network service requirements associated with each of the network service requests. The method may include monitoring one or more ML packages deployed from the ML model repository in response to each of the plurality of network service requests, and generating a co-relation between each network service request, the corresponding network service requirements, and the one or more ML packages deployed from the ML model repository, for optimization of each network service over a period of time. The method may include receiving an incoming network service request and deploying the ML package corresponding to the network service requirements of the incoming network service request based on the generated co-relation.
Accordingly, an embodiment herein discloses an electronic device for managing machine learning (ML) services in a wireless communication network. The electronic device includes a memory and at least one processor coupled to the memory. The at least one processor may be configured to store a plurality of ML packages, where each of the plurality of ML packages executes at least one network service request. The at least one processor may be configured to receive a trigger based on the at least one network service request from a server. The at least one processor may be configured to determine a plurality of parameters corresponding to the at least one network service request, in response to receiving the trigger from the server. The at least one processor may be configured to determine at least one ML package from the plurality of ML packages based on the trigger and the plurality of parameters corresponding to the at least one network service request. The at least one processor may be configured to deploy the determined at least one ML package for executing the at least one network service request.
Accordingly, an embodiment herein discloses a non-transitory computer-readable storage medium storing instructions. The instructions, when executed by at least one processor of an electronic device for managing machine learning (ML) services, cause the electronic device to perform operations. The operations may comprise storing a plurality of ML packages, where each of the plurality of ML packages executes at least one network service request. The operations may comprise receiving a trigger based on the at least one network service request from a server. The operations may comprise determining a plurality of parameters corresponding to the at least one network service request, in response to receiving the trigger from the server. The operations may comprise determining at least one ML package from the plurality of ML packages based on the trigger and the plurality of parameters corresponding to the at least one network service request, and deploying the determined at least one ML package for executing the at least one network service request.
These and other aspects of the various example embodiments herein will be better appreciated and understood when considered in conjunction with the following description and the accompanying drawings. It should be understood, however, that the following descriptions, while indicating example embodiments and numerous specific details thereof, are given by way of illustration and not of limitation. Many changes and modifications may be made within the scope of the disclosure herein without departing from the spirit thereof, and the embodiments herein include all such modifications.
The method and the device are illustrated in the accompanying drawings, throughout which reference letters indicate corresponding parts in the various figures. The above and other aspects, features and advantages of certain embodiments of the present disclosure will be more apparent from the following detailed description, taken in conjunction with the accompanying drawings, in which:
The example embodiments herein and the various features and advantageous details thereof are explained more fully with reference to the non-limiting embodiments that are illustrated in the accompanying drawings and detailed in the following description. Descriptions of well-known components and processing techniques may be omitted so as to not unnecessarily obscure the disclosure. The various embodiments described herein are not necessarily mutually exclusive, as various embodiments can be combined with one or more other embodiments to form new embodiments. The term “or” as used herein, refers to a non-exclusive or, unless otherwise indicated. The examples used herein are intended merely to facilitate an understanding of ways in which the embodiments herein can be practiced and to further enable those skilled in the art to practice the embodiments herein. Accordingly, the examples should not be construed as limiting the scope of the embodiments herein.
The example embodiments may be described and illustrated in terms of blocks which carry out a described function or functions. These blocks, which may be referred to herein as managers, units, controllers, hardware components or the like, are physically implemented by analog and/or digital circuits such as logic gates, integrated circuits, microprocessors, microcontrollers, memory circuits, passive electronic components, active electronic components, optical components, hardwired circuits and the like, and may optionally be driven by firmware. The circuits may, for example, be embodied in one or more semiconductor chips, or on substrate supports such as printed circuit boards and the like. The circuits of a block may be implemented by dedicated hardware, or by a processor (e.g., one or more programmed microprocessors and associated circuitry), or by a combination of dedicated hardware to perform some functions of the block and a processor to perform other functions of the block. Each block of the embodiments may be physically separated into two or more interacting and discrete blocks without departing from the scope of the disclosure. Likewise, the blocks of the embodiments may be physically combined into more complex blocks without departing from the scope of the disclosure.
The accompanying drawings are provided to help easily understand various technical features and it should be understood that the embodiments presented herein are not limited by the accompanying drawings. As such, the present disclosure should be understood to extend to any alterations, equivalents and substitutes in addition to those which are particularly set out in the accompanying drawings. Although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are generally only used to distinguish one element from another.
Accordingly, the various example embodiments herein disclose a method for managing machine learning (ML) services in a wireless communication network. The method includes configuring, by an ML services management controller, a repository of a plurality of ML packages and providing, by the ML services management controller, access to the repository to at least one network operator. Each ML package executes at least one network service request based on a pre-defined service requirement. The method also includes receiving, by the ML services management controller, a trigger from a Network Management Server (NMS) based on the at least one network service request and determining, by the ML services management controller, a plurality of parameters corresponding to the received at least one network service request, in response to receiving the trigger from the NMS. Further, the method also includes determining, by the ML services management controller, at least one ML package from the plurality of ML packages in the repository based on the trigger and the plurality of parameters corresponding to the received at least one network service request; and automatically deploying, by the ML services management controller, the selected ML package for executing the at least one network service request.
Accordingly, various example embodiments herein disclose a machine learning (ML) services management controller for managing services in a wireless communication network. The ML services management controller includes a memory, a processor, a communicator and an ML services manager. The ML services manager is configured to configure a repository of a plurality of ML packages and provide access to the repository to at least one network operator. Each ML package executes at least one network service request based on a pre-defined (e.g., specified) service requirement. The ML services manager is also configured to receive a trigger from a Network Management Server (NMS) based on the at least one network service request and determine a plurality of parameters corresponding to the received at least one network service request, in response to receiving the trigger from the NMS. Further, the ML services manager is also configured to determine at least one ML package from the plurality of ML packages in the repository based on the trigger and the plurality of parameters corresponding to the received at least one network service request; and automatically deploy the selected ML package for executing the at least one network service request.
Unlike the conventional methods and systems, the disclosed method enables an operator to save on CAPEX (e.g., capital expense) and OPEX (e.g., operational expense) owing to the added intelligence. In large countries, operators conventionally have to spend large amounts of money on ML resources and servers; automation therefore saves time and cost. The training can also be tuned to obtain higher accuracy.
Network slicing will be deployed in 5G networks in the near future, and machine learning has been proposed to optimize such networks. Conventionally, machine learning is implemented manually (which is not scalable): the training models, the requirements for ML, and the ML model deployments are handled by operators or service providers. In large-scale networks this task becomes both resource- and time-consuming. MaaS for NaaS automates the resource provisioning (providing it as a service) for ML deployment/training/prediction based on: 1) slice/use case (combination of services), 2) region, 3) network traffic pattern classifications, and 4) operator policies; and provides: 1) ML and cloud resources, 2) the ML model, 3) the ML prediction error, and 4) the ML prediction periodicity, to automate the ML deployment process.
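The following is a minimal sketch of this inputs-to-outputs mapping, assuming hypothetical names (`ProvisioningPlan`, `provision_ml_service`) and an invented rule for illustration; it is not the actual MaaS for NaaS implementation.

```python
# Sketch of the MaaS-for-NaaS provisioning decision: four inputs (slice/use
# case, region, traffic pattern classification, operator policy) mapped to
# the four outputs listed above. All names and rules are assumptions.
from dataclasses import dataclass

@dataclass
class ProvisioningPlan:
    ml_resources: str        # ML/cloud resource pool to use
    ml_model: str            # e.g., "LSTM" or "CNN"
    prediction_error: float  # tolerable ML prediction error
    periodicity_s: int       # ML prediction periodicity in seconds

def provision_ml_service(slice_type: str, region: str,
                         traffic_class: str, policy: dict) -> ProvisioningPlan:
    # Hypothetical rule: latency-critical slices get edge resources and a
    # tight prediction periodicity; other slices run in a central cloud.
    if slice_type in ("URLLC", "VR"):
        return ProvisioningPlan("edge", "LSTM",
                                policy.get("max_error", 0.01), 1)
    return ProvisioningPlan("central-cloud", "CNN",
                            policy.get("max_error", 0.05), 60)

plan = provision_ml_service("VR", "region-1", "high-seasonality",
                            {"max_error": 0.01})
```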
1. Considering a 5G network with a large number of services and a large number of cells generating huge amounts of data, manual selection of the ML packages can lead to selection of a non-optimal or sub-optimal ML package.
2. Sub-optimal/non-optimal ML package selection can further lead to sub-optimal ML resource utilization and can degrade the ML training and prediction performance.
3. Degradation of training and prediction performance can lead to selection of a non-optimal or sub-optimal mitigation solution.
4. Selection of a sub-optimal or non-optimal mitigation solution can cause the problem to persist, which may lead to poor QoS/QoE.
5. Selection of a sub-optimal or non-optimal mitigation solution can lead to a condition where operators are not able to meet service level agreements (SLAs).
6. Poor QoS/QoE, or an SLA not met for the service, may lead to subscriber churn (subscribers leaving the network due to poor service).
7. Selection of a sub-optimal or non-optimal mitigation solution can also increase the operator's OPEX.
Referring to
At 3, at the ML orchestrator, the operator manually selects the ML model with no ML resource optimization. At 4, the non-optimal ML service deployment plan is provided to the AI server. Therefore, in the conventional methods and systems for ML deployment, as part of the ML orchestration, a service engineer needs to manually select the right ML package for different services in different locations. Since the ML packages are manually selected by the service engineer, the selected package is often not the best package and, as a result, the ML resource allocation is not efficient.
Referring now to the drawings and more particularly to
The ML services management controller may be implemented in an electronic device.
Referring to
The memory (120) includes an ML model repository (122) which includes a plurality of ML packages. The memory (120) also stores instructions to be executed by the processor (140) for managing the ML services in the wireless communication network. Storage elements of the memory (120) may include magnetic hard discs, optical discs, floppy discs, flash memories, or forms of electrically programmable (EPROM) or electrically erasable and programmable (EEPROM) memories. In addition, the memory (120) may, in some examples, be considered a non-transitory storage medium. The term “non-transitory” may indicate that the storage medium is not embodied in a carrier wave or a propagated signal. However, the term “non-transitory” should not be interpreted to mean that the memory (120) is non-movable. In various examples, the memory (120) can be configured to store larger amounts of information. In certain examples, a non-transitory storage medium may store data that can, over time, change (e.g., in Random Access Memory (RAM) or cache). The memory (120) can be an internal storage, an external storage unit of the electronic device (100), cloud storage, or any other type of external storage.
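A minimal sketch of how the ML model repository (122) might hold ML packages with the fields listed elsewhere in this description (predicted resource requirement, model and libraries, error window, periodicity, accuracies); the class and field names are illustrative assumptions, not the disclosed data layout.

```python
# Hypothetical in-memory stand-in for the ML model repository (122).
from dataclasses import dataclass, field

@dataclass
class MLPackage:
    package_id: str
    predicted_resources: dict          # e.g., {"cpu": 4, "gpu": 1, "ram_gb": 16}
    model_name: str                    # e.g., "LSTM"
    libraries: list = field(default_factory=list)
    error_window: float = 0.05         # error prediction window
    periodicity_s: int = 60            # periodicity of predicting the error
    training_accuracy: float = 0.0
    prediction_accuracy: float = 0.0

class MLModelRepository:
    """Stores ML packages keyed by package id."""
    def __init__(self):
        self._packages: dict[str, MLPackage] = {}

    def store(self, pkg: MLPackage) -> None:
        self._packages[pkg.package_id] = pkg

    def get(self, package_id: str) -> MLPackage:
        return self._packages[package_id]
```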
In an embodiment, the processor (140) may include various processing circuitry and communicates with the memory (120), the communicator (160) and the ML services manager (180). The processor (140) is configured to execute instructions stored in the memory (120) for managing the ML services in the wireless communication network. The processor (140) may include one or a plurality of processors, and may be a general purpose processor, such as a central processing unit (CPU), an application processor (AP), or the like, a graphics-only processing unit such as a graphics processing unit (GPU), a visual processing unit (VPU), and/or an Artificial Intelligence (AI) dedicated processor such as a neural processing unit (NPU).
In an embodiment, the communicator (160) may include various communication circuitry and is configured for communicating internally between internal hardware components and with external devices via one or more networks. The communicator (160) includes an electronic circuit specific to a standard that enables wired or wireless communication.
In an embodiment, the ML services manager (180) includes a ML template provisioning engine (182), a network traffic classifier (184) and an intelligent ML service provisioning engine (186).
In an embodiment, the ML services manager (180) is configured to configure a repository of a plurality of ML packages and provide access to the repository to at least one network operator. Each ML package executes at least one network service request based on a pre-defined service requirement. Further, the ML services manager (180) is configured to receive a trigger from a Network Management Server (NMS) based on the at least one network service request and determine a plurality of parameters corresponding to the received at least one network service request (received in operations 2, 3, 4 and 5), in response to receiving the trigger from the NMS. Further, the ML services manager (180) is configured to determine at least one ML package from the plurality of ML packages in the repository based on the trigger and the plurality of parameters corresponding to the received at least one network service request, and automatically deploy the selected ML package for executing the at least one network service request. The plurality of parameters corresponding to the received at least one network service request comprises information of the service profile of the network, the ML requirements of the at least one network operator, the network traffic pattern for a specific service, and unfilled ML templates associated with the specific service.
In an embodiment, the network traffic classifier (184) is configured to receive the information of the service profile of the network and the ML requirements of the at least one network operator as inputs and determine a plurality of network elements exhibiting the same network traffic pattern over a period of time. Further, the network traffic classifier (184) is configured to group the plurality of network elements exhibiting the same network traffic pattern over the period of time and train one of the grouped network elements using a specific training model. The network traffic classifier (184) is configured to instruct an ML orchestrator (192) to train the rest of the grouped network elements using the specific training model used by the ML services management controller (100) for training the one network element; reusing the specific training model in this manner saves the ML resources used for training. Each ML package comprises at least one of: a predicted requirement of network resources for implementing an ML technique, a predicted optimal ML model and related libraries, an error prediction window, a periodicity of predicting the error, and at least one of a training accuracy and a prediction accuracy.
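An illustrative sketch of the train-one-per-group idea, assuming traffic similarity is measured by correlation between equal-length time series; the threshold, helper names, and the similarity measure itself are assumptions rather than the disclosed classifier.

```python
# Group network elements with strongly correlated traffic series, train a
# single representative per group, and reuse its model for the rest.
import numpy as np

def group_by_traffic_pattern(series: dict, threshold: float = 0.9):
    """Group element ids whose traffic time series correlate above threshold."""
    groups, used = [], set()
    ids = list(series)
    for i, a in enumerate(ids):
        if a in used:
            continue
        group, _ = [a], used.add(a)
        for b in ids[i + 1:]:
            if b not in used and np.corrcoef(series[a], series[b])[0, 1] >= threshold:
                group.append(b)
                used.add(b)
        groups.append(group)
    return groups

def train_groups(series, train_fn):
    """Train only one representative per group; share the model with the rest."""
    models = {}
    for group in group_by_traffic_pattern(series):
        model = train_fn(series[group[0]])   # train the representative once
        for element in group:
            models[element] = model          # reuse: saves ML training resources
    return models
```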
In an embodiment, the intelligent ML service provisioning engine (186) is configured to receive a trigger from the NMS (operation 1). The NMS receives the trigger from a slice manager and forwards it to the intelligent ML service provisioning engine (186). The trigger, initiated based on the at least one network service request, indicates at least one of: formation of a new network slice and an anomaly corresponding to the new network slice; or a scenario in which the SLA provided by the network operator is not met. At operation 2, the intelligent ML service provisioning engine (186) is configured to receive a service profile from the NMS to create an ML service deployment plan. The service profile comprises service properties such as, for example but not limited to, the slice/service id, the location/region, and operator policies. At operation 3, the intelligent ML service provisioning engine (186) is configured to receive the operator's ML requirements from the NMS. The ML requirements include, for example but not limited to, the anomaly id, the anomaly type, the current ML resource usage for different regions, the KPI list for optimization of a service, etc. The ML requirements convey the run-time value of the operator's ML resource usage to the intelligent ML service provisioning engine (186). Further, based on the inputs received at operations 2 and 3, the intelligent ML service provisioning engine (186) requests a classification of possible network traffic patterns from the network traffic classifier (184), which is in turn shared as input to the intelligent ML service provisioning engine (186) by the network traffic classifier (184) (operation 4).
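A hedged sketch of the three inputs consumed in operations 1-3; the field names follow the examples in the text, but the class shapes are assumptions for illustration only.

```python
# Hypothetical structures for the trigger, service profile, and ML requirements.
from dataclasses import dataclass

@dataclass
class Trigger:
    kind: str            # "new_slice", "slice_anomaly", or "sla_not_met"
    slice_id: str

@dataclass
class ServiceProfile:
    service_id: str
    region: str
    operator_policies: dict

@dataclass
class MLRequirements:
    anomaly_id: str
    anomaly_type: str
    ml_resource_usage_pct: float   # run-time ML resource usage per region
    kpi_list: list
```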
In an embodiment, the ML template provisioning engine (182) is configured to share at least one ML template as an input to the intelligent ML service provisioning engine (186) on determining that the trigger is received, based on the service type and the service id.
The intelligent ML service provisioning engine (186) is then configured to perform one of reinforcement learning or dynamic deep learning to derive an ML service based on the inputs received at operations 1, 2 and 3. The ML service comprises the ML model, the ML prediction error window, the ML prediction periodicity, and the ML training/prediction accuracies. In parallel, the intelligent ML service provisioning engine (186) is also configured to predict future ML resources (hardware, software and cloud) in the operator network so that appropriate ML resources can be allocated to the current ML task requested by the operator. The ML resource usage value is expressed as a percentage. The operator manages the ML resources across services or across cells based on the ML resource usage value; this is an operator-specific implementation based on the ML resource allocation and the deployments in the operator network. Further, the values corresponding to the determined ML package are filled into the unfilled ML template associated with the specific service, received as input at operation 4 from the ML template provisioning engine (182), and the ML service provisioning plan is shared as an output to an AI server (190).
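A minimal sketch of the selection-and-fill step, in which a placeholder scoring function stands in for the deep reinforcement learning / dynamic deep learning engine; every name below is an illustrative assumption, and packages are represented as plain dictionaries.

```python
# Select the best package with a learned policy (stubbed here), then fill the
# unfilled ML template fields from the determined package's values.
def select_package(packages, trigger, params, score_fn):
    """score_fn stands in for the learned policy's value estimate."""
    return max(packages, key=lambda p: score_fn(p, trigger, params))

def fill_template(template: dict, package: dict) -> dict:
    """Fill the unfilled ML template fields from the determined package."""
    filled = dict(template)
    filled.update({
        "ml_model": package["model_name"],
        "prediction_error_window": package["error_window"],
        "prediction_periodicity_s": package["periodicity_s"],
        "training_accuracy": package["training_accuracy"],
        "prediction_accuracy": package["prediction_accuracy"],
    })
    return filled
```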
In an embodiment, the intelligent ML service provisioning engine (186) is configured to monitor a plurality of network service requests from the NMS and identify one or more network service requirements associated with each of the network service requests. Further, the intelligent ML service provisioning engine (186) is configured to monitor one or more ML packages deployed from the ML model repository in response to each of the plurality of network service requests, and generate a co-relation between each network service request, the corresponding network service requirements, and the one or more ML packages deployed from the ML model repository, for optimization of each network service over a period of time. Furthermore, the intelligent ML service provisioning engine (186) is configured to receive an incoming network service request and automatically deploy the ML package corresponding to the network service requirements of the incoming network service request based on the generated co-relation.
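A sketch of the co-relation described here as a simple lookup from observed requirements to the package that served them; the keying scheme (sorted requirement items with scalar values) is an assumption, not the disclosed mechanism.

```python
# Record which package served which requirements, then reuse that mapping
# for matching incoming requests.
class RequestPackageCorrelation:
    def __init__(self):
        self._history: dict[tuple, str] = {}

    def record(self, requirements: dict, package_id: str) -> None:
        """Remember which package was deployed for these requirements."""
        self._history[tuple(sorted(requirements.items()))] = package_id

    def lookup(self, requirements: dict):
        """Return the package previously deployed for matching requirements,
        or None if no matching request has been seen yet."""
        return self._history.get(tuple(sorted(requirements.items())))
```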
Although
Referring to
At operation 304, the method includes the ML services management controller (100) providing the access to the repository to at least one network operator. For example, in the ML services management controller (100) as illustrated in
At operation 306, the method includes the ML services management controller (100) receiving the trigger from the Network Management Server (NMS) based on the at least one network service request. For example, in the ML services management controller (100) as illustrated in
At operation 308, the method includes the ML services management controller (100) determining the plurality of parameters corresponding to the received at least one network service request, in response to receiving the trigger from the NMS. For example, in the ML services management controller (100) as illustrated in
At operation 310, the method includes the ML services management controller (100) determining the at least one ML package from the plurality of ML packages in the repository based on the trigger and the plurality of parameters corresponding to the received at least one network service request. For example, in the ML services management controller (100) as illustrated in
At operation 312, the method includes the ML services management controller (100) automatically deploying the selected ML package for executing the at least one network service request. For example, in the ML services management controller (100) as illustrated in
The various actions, acts, blocks, steps, operations or the like in the flow diagram may be performed in the order presented, in a different order or simultaneously. Further, in various embodiments, some of the actions, acts, blocks, steps, or the like may be omitted, added, modified, skipped, or the like without departing from the scope of the disclosure.
Referring to
Further, the ML template provisioning engine (182) also allows the ML designer to create hierarchical intents for hierarchical services. An ML service request is received at a frontend (182a) as per the trigger and passed on to a design tool (182b). The design tool (182b) verifies the request and checks for the request in the database. Further, the design tool (182b) fetches the ML template from the database using ML import/export. The ML service template is converted to a desired format, as per the request made, using format converters. The ML service template is then exported to the ML intent generator through ML intent distribution.
Examples of the ML templates provided by the ML template provisioning engine (182) include, but are not limited to:
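One hypothetical ML template of the kind the engine (182) might provide; the actual template examples are not reproduced here, and every field below is an illustrative assumption. Unfilled fields remain empty until the intelligent ML service provisioning engine (186) fills them.

```python
# Hypothetical unfilled ML template for a VR slice; None marks fields to be
# filled from the determined ML package.
vr_slice_template = {
    "service_type": "VR",
    "kpi_list": ["latency_ms", "throughput_mbps"],
    "ml_model": None,
    "prediction_error_window": None,
    "prediction_periodicity_s": None,
    "training_accuracy": None,
    "prediction_accuracy": None,
}
```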
The service manager (184a) is configured to receive the request for traffic classification from the operator, based on the trigger, for a group of different network elements/cells/network circles. The seasonality check engine (184b) is configured to check the seasonality for each slice or service. The seasonality check engine (184b) analyses the data from the group of different network elements/cells/network circles having the same behaviour or the same traffic patterns and passes the seasonality check results to the classifier (184c).
The classifier (184c) is configured to bundle the nodes having the same seasonality and pass instructions to perform training for only one such network element/cell/network circle. Further, the classifier (184c) sends instructions to the ML orchestrator (192) to use the same training model for the rest of the network elements/cells/network circles in the group. Therefore, using the same training model for the rest of the network elements/cells/network circles in the group saves the ML resources used for training.
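A hedged sketch of the seasonality check and bundling: a series is treated as seasonal at a given period if its lag autocorrelation is high, and nodes sharing a dominant period are bundled so only the first is trained. The candidate periods, threshold, and function names are assumptions, not the disclosed algorithm.

```python
# Detect a dominant seasonal period per node and bundle nodes by that period.
import numpy as np

def dominant_period(x, candidate_periods=(24, 168), threshold=0.6):
    """Return the first candidate period whose lag autocorrelation exceeds
    the threshold, or None if the series shows no clear seasonality."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    for p in candidate_periods:
        if len(x) > 2 * p:
            r = np.corrcoef(x[:-p], x[p:])[0, 1]
            if r >= threshold:
                return p
    return None

def bundle_by_seasonality(series: dict):
    """Bundle node ids by dominant period; the classifier trains only the
    first node in each bundle and reuses its model for the rest."""
    bundles = {}
    for node, x in series.items():
        bundles.setdefault(dominant_period(x), []).append(node)
    return bundles
```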
The ML service provisioning engine connector (184d) is a connector device and is configured to pass on the results from the network traffic classifier (184) to the intelligent ML service provisioning engine (186).
The service management connector (186a) is a connector which is configured to communicate with the NMS to obtain service/slice-specific and operator-specific configurations. The ML template provisioning engine connector (186c) is a connector which is configured to communicate with the ML template provisioning engine (182) to obtain the available templates for the ML service. The ML orchestrator connector (186d) is a connector which is configured to communicate with the ML orchestrator (192) to trigger deployment of the ML pipeline based on the ML intent. The end-to-end processing engine (186b) is configured to process the end-to-end flow for generating the ML service for the requested slice/service.
At operation 1 (refer to
At operation 2 (refer to
At operation 3 (refer to
At operation 4 (refer to
The intelligent ML service provisioning engine (186) then generates the ML service deployment plan based on these activities and sends the ML service deployment plan to the AI server (190) via the ML orchestrator (192).
Generally, the LSTM performs well in the majority of cases, and an operator might therefore apply the LSTM for all the cells. However, in a few cases a CNN performs better, so applying the LSTM everywhere can compromise QoS or increase OPEX for those specific cells.
Referring to
Referring to
Referring to
At operation 3, the NMS requests the intelligent ML service provisioning engine (186) to deploy the ML pipeline by providing the slice type as a VR slice and the slice id over a REST message based interface. At operation 4, based on the slice type and id, the NMS provides the service profile of the VR slice over the REST message based interface. A typical URLLC service profile may contain, for example but not limited to: availability: 99.9%; supported device velocity: 2 km/h; slice quality of service parameter (5QI): 82; and coverageAreaTAList: a list of tracking areas where the slice is deployed (to help the intelligent ML service provisioning engine (186) identify the nearby cells).
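The profile above, rendered as a hypothetical REST payload; the key names and the tracking-area entries are assumptions, while the values repeat those in the text.

```python
# Hypothetical service profile payload for the VR slice of operation 4.
vr_slice_service_profile = {
    "sliceType": "VR",
    "availability": 99.9,                    # percent
    "supportedDeviceVelocityKmh": 2,
    "5QI": 82,                               # slice quality of service parameter
    "coverageAreaTAList": ["TA-1", "TA-2"],  # tracking areas (illustrative)
}
```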
At operation 5, on receiving the service profile, the intelligent ML service provisioning engine (186) requests, from the NMS, the current ML resource configuration for the URLLC service and the current ML usage of the operator.
At operation 6, the NMS sends the requested information over the REST based interface to the intelligent ML service provisioning engine (186). The shared information may look like the following:
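The original illustration of this payload is not reproduced here; the following hypothetical structure only suggests the kind of information operation 6 returns, and every field and value is an assumption.

```python
# Hypothetical ML resource configuration and usage returned by the NMS.
ml_resource_status = {
    "service": "URLLC",
    "allocated": {"cpu_cores": 64, "gpus": 8, "ram_gb": 256},
    "current_usage_pct": 72.5,           # run-time ML resource usage
    "allowed_prediction_latency_ms": 10,
}
```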
At operation 7, the intelligent ML service provisioning engine (186) requests the network traffic classifier (184) to check the seasonality of the cells covered in the service to determine the network traffic patterns. The intelligent ML service provisioning engine (186) shares some of the information required for the test, already received as the service profile and the ML resource usage; in the considered example, the intelligent ML service provisioning engine (186) can share the KPI list, the tracking area list, the ML resource utilization allowance, and the allowed prediction latency.
At operation 8, the network traffic classifier (184) will perform the seasonality check and find groups of cells with similar seasonality. The seasonality information helps the ML orchestrator (192) to deploy only a single instance of training for each cell group instead of performing training for each and every cell. At operation 9, the network traffic classifier (184) provides the requested information back to the intelligent ML service provisioning engine (186) over REST message based interface which may look like the following in case of VR:
List of Cell Groups
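The cell-group listing that followed in the original is not reproduced; this hypothetical structure merely illustrates what operation 9 might return: groups of cells sharing the same seasonality, so that training runs once per group. All ids and labels are assumptions.

```python
# Hypothetical cell groups returned by the network traffic classifier (184).
cell_groups = [
    {"group_id": 1, "cells": [1, 4, 9], "seasonality": "daily"},
    {"group_id": 2, "cells": [2, 3, 7], "seasonality": "weekly"},
]
```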
At operation 10, the intelligent ML service provisioning engine (186) requests the appropriate template from the ML template provisioning engine (182), and at operation 11 the intelligent ML service provisioning engine (186) receives the appropriate ML template from the ML template provisioning engine (182).
At operation 12, based on the service profile, the ML resource utilization, and the network traffic pattern classification inputs, further learning is performed by the ML intent generator engine using learning models such as reinforcement learning or dynamic deep learning.
At operation 13, after learning, the appropriate ML intent is passed on to the ML orchestrator (192) in the AI server (190), as provided below:
Optimized ML Provisioning:
1. ML resource locations: Training: AI server; Predictions: Near-RT RIC
2. List of cell groups
3. ML prediction periodicity: 1 second
4. ML training and prediction accuracies: 99%
5. Pause ongoing ML training for cell groups: CellgroupID 3: {cell 15, 17}: Service: EMBB
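The intent listed in items 1-5 above, re-expressed as a hypothetical machine-readable structure; the key names are assumptions, while the values repeat the listing.

```python
# Hypothetical encoding of the ML intent passed to the ML orchestrator (192).
ml_intent = {
    "ml_resource_locations": {"training": "AI server",
                              "prediction": "Near-RT RIC"},
    "cell_groups": [],              # the cell-group list from operation 9
    "prediction_periodicity_s": 1,
    "training_accuracy": 0.99,
    "prediction_accuracy": 0.99,
    "pause_training": [{"cell_group_id": 3, "cells": [15, 17],
                        "service": "EMBB"}],
}
```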
At operation 14, the AI server (190) trains and predicts based on the output of the intelligent ML service provisioning engine (186) and sends the optimized mitigation solution to the slice manager.
At operation 904, the MaaS for NaaS is provided as part of the LSM with intelligent AI solutions.
At operation 906, the solution is provided as part of an O-RAN solution, co-exists with the AI server (190) as provided in operation 902, and can interact with the Non-RT RIC or the Near-RT RIC to further optimize the AI solutions.
While the disclosure has been illustrated and described with reference to various example embodiments, it will be understood that the various example embodiments are intended to be illustrative, not limiting. It will be further understood by those skilled in the art that various changes in form and detail may be made without departing from the true spirit and full scope of the disclosure, including the appended claims and their equivalents. It will also be understood that any of the embodiment(s) described herein may be used in conjunction with any other embodiment(s) described herein.
Number | Date | Country | Kind
---|---|---|---
202141034308 | Jul 2021 | IN | national
202141034308 | Jun 2022 | IN | national
This application is a continuation of International Application No. PCT/KR2022/009694, filed on Jul. 5, 2022, in the Korean Intellectual Property Receiving Office and claiming priority to Indian Provisional patent application number 202141034308, filed on Jul. 30, 2021, in the Indian Patent Office, and to Indian Complete patent application number 202141034308, filed on Jun. 15, 2022, in the Indian Patent Office, the disclosures of each of which are incorporated by reference herein in their entireties.
Relation | Number | Date | Country
---|---|---|---
Parent | PCT/KR2022/009694 | Jul 2022 | US
Child | 17863576 | | US